Search Results

Search found 9696 results on 388 pages for 'proxy authentication'.


  • What is auto-mounting my media volume?

    - by user285277
    Something is repeatedly mounting my "media" share, doing something with it, then quietly un-mounting it with no notifications at the user level. from the little I can gleaned from the console messages below, I thought I'd managed to stop it, if not understand it last night when I followed instructions for deleting all traces of the Google Update Daemon. I've not been using any Google apps whatsoever, so I was surprised to see that in Console. What's more surprising, and perhaps a little distressing, is that the same thing occurred this evening, when the Google Daemon is long gone. I don't have that log because I can't recall precisely what time it occurred. I'll do a search and provide it if you wish, though. In the meantime, any help with this would be extremely appreciated. I've asked over at Apple Discussions but I think it might be a little deeper than those manning the boards this weekend are comfortable with. It's certainly beyond my meager skills. With apologies in advance if this is more lines thank you need. Please note that I've transformed the Google links a little because the forum here requires more reputation points before one can post more than two links. 12/27/13 10:47:31.000 PM kernel[0]: memorystatus_thread: idle exiting pid 53629 [distnoted] 12/27/13 10:48:10.433 PM com.apple.Preview.TrustedBookmarksService[53640]: Failed to resolve bookmark data at index: 0; not stale; error: The file doesn’t exist. 12/27/13 10:48:10.434 PM com.apple.Preview.TrustedBookmarksService[53640]: Failed to resolve bookmark data at index: 1; not stale; error: The file doesn’t exist. 12/27/13 10:48:10.950 PM com.apple.SecurityServer[17]: Session 103257 created 12/27/13 10:48:34.328 PM com.apple.Preview.TrustedBookmarksService[53640]: Failed to resolve bookmark data at index: 2; not stale; error: The file doesn’t exist. 12/27/13 10:48:34.000 PM kernel[0]: AFP_VFS afpfs_mount: /Volumes/Media Archive-1, pid 53641 12/27/13 10:48:34.000 PM kernel[0]: AFP_VFS afpfs_mount : succeeded on volume 0xffffff80d6355008 /Volumes/Media Archive-1 (error = 0, retval = 0) 12/27/13 10:49:32.000 PM kernel[0]: wlEvent: en0 en0 Link DOWN virtIf = 0 12/27/13 10:49:32.000 PM kernel[0]: AirPort: Link Down on en0. Reason 8 (Disassociated because station leaving). 12/27/13 10:49:32.000 PM kernel[0]: en0::IO80211Interface::postMessage bssid changed 12/27/13 10:49:33.681 PM configd[16]: network changed: v4(en0-:10.0.1.12) DNS- Proxy- SMB 12/27/13 10:49:33.697 PM configd[16]: network changed: DNS* Proxy 12/27/13 10:49:35.475 PM KernelEventAgent[57]: tid 00000000 received event(s) VQ_NOTRESP (1) 12/27/13 10:49:35.000 PM kernel[0]: ASP_TCP Disconnect: triggering reconnect by bumping reconnTrigger from curr value 0 on so 0xffffff802176b4a0 12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect started /Volumes/Media Archive-1 prevTrigger 0 currTrigger 1 12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: doing reconnect on /Volumes/Media Archive-1 12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: posting to KEA EINPROGRESS for /Volumes/Media Archive-1 12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: Max reconnect time: 600 secs, Connect timeout: 15 secs for /Volumes/Media Archive-1 12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect to the server /Volumes/Media Archive-1 12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect on /Volumes/Media Archive-1 failed 65. 
12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: sleep for 1 seconds and then try again 12/27/13 10:49:35.479 PM KernelEventAgent[57]: tid 00000000 type 'afpfs', mounted on '/Volumes/Media Archive-1', from '//Me@Capsule._afpovertcp._tcp.local/Media%20Archive', not responding 12/27/13 10:49:35.487 PM KernelEventAgent[57]: tid 00000000 found 1 filesystem(s) with problem(s) 12/27/13 10:49:36.692 PM com.bourgeoisbits.cloak.agent[14503]: NetworkProfile: (null), (null), (null) (Connected: NO, Airport: NO, Open: NO) [trusted] 12/27/13 10:49:36.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect to the server /Volumes/Media Archive-1 12/27/13 10:49:36.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect on /Volumes/Media Archive-1 failed 65. 12/27/13 10:49:36.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: sleep for 2 seconds and then try again 12/27/13 10:49:38.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect to the server /Volumes/Media Archive-1 12/27/13 10:49:38.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect on /Volumes/Media Archive-1 failed 65. 12/27/13 10:49:38.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: sleep for 4 seconds and then try again 12/27/13 10:49:41.000 PM kernel[0]: CODE SIGNING: cs_invalid_page(0x1000): p=53662[GoogleSoftwareUp] clearing CS_VALID 12/27/13 10:49:42.102 PM GoogleSoftwareUpdateDaemon[53663]: -[KeystoneDaemon logServiceState] GoogleSoftwareUpdate daemon (1.1.0.3659) vending: com.google.Keystone.Daemon.UpdateEngine: 2 connection(s) com.google.Keystone.Daemon.Administration: 0 connection(s) 12/27/13 10:49:42.113 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateEngine updateProductID:] KSUpdateEngine updating product ID: "com.google.Keystone" 12/27/13 10:49:42.116 PM GoogleSoftwareUpdateDaemon[53663]: -[KSCheckAction performAction] KSCheckAction checking 1 ticket(s). 
12/27/13 10:49:42.121 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateCheckAction performAction] KSUpdateCheckAction starting update check for ticket(s): {( <KSTicket:0x531870 productID=com.google.Keystone version=1.1.0.3659 xc=<KSPathExistenceChecker:0x5302d0 path=/Library/Google/GoogleSoftwareUpdate/GoogleSoftwareUpdate.bundle/> serverType=Omaha url=htt[PeeEs]://tools.google.com/service/update2 creationDate=2012-08-12 14:47:10 > )} Using server: <KSOmahaServer:0x534340 engine=<KSDaemonUpdateEngine:0x52e530> params={ EngineVersion = "1.1.0.3659"; ActivesInfo = { "com.google.talkplugin" = { LastRollCallPingDate = 2013-10-06 07:00:00 +0000; }; "com.google.Keystone" = { LastRollCallPingDate = 2013-10-06 07:00:00 +0000; LastActivePingDate = 2013-10-06 07:00:00 +0000; LastActiveDate = 2013-12-28 03:49:42 +0000; }; "com.google.picasa" = { LastActiveDate = 2012-08-29 10:15:42 +0000; }; }; UserInitiated = 0; IsSystem = 1; OmahaOSVersion = "10.8.5_i486"; Identity = KeystoneDaemon; AllowedSubdomains = ( ".omaha.sandbox.google.com", ".tools.google.com", ".www.google.com", ".corp.google.com" ); } > 12/27/13 10:49:42.130 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateCheckAction performAction] KSUpdateCheckAction running KSServerUpdateRequest: <KSOmahaServerUpdateRequest:0x1a31a90 server=<KSOmahaServer:0x534340> url="htt[PeeEs]://tools.google.com/service/update2" runningFetchers=0 tickets=1 activeTickets=1 rollCallTickets=1 body= <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <o:gupdate xmlns:o="htt[Pee]://www.google.com/update2/request" protocol="2.0" version="KeystoneDaemon-1.1.0.3659" ismachine="1"> <o:os platform="mac" version="MacOSX" sp="10.8.5_i486"></o:os> <o:app appid="com.google.Keystone" version="1.1.0.3659" lang="en-us" installage="502" brand="GGLG"> <o:ping r="82" a="82"></o:ping> <o:updatecheck></o:updatecheck> </o:app> </o:gupdate> > 12/27/13 10:49:42.291 PM GoogleSoftwareUpdateDaemon[53663]: -[KSOutOfProcessFetcher(PrivateMethods) helperDidTerminate:] The Internet connection appears to be offline. [NSURLErrorDomain:-1009] 12/27/13 10:49:42.291 PM GoogleSoftwareUpdateDaemon[53663]: -[KSServerUpdateRequest(PrivateMethods) fetcher:failedWithError:] KSServerUpdateRequest fetch failed. (productIDs: com.google.Keystone) [com.google.UpdateEngine.CoreErrorDomain:702 - 'htt[PeeEs]://tools.google.com/service/update2'] (The Internet connection appears to be offline. [NSURLErrorDomain:-1009]) 12/27/13 10:49:42.292 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateCheckAction(PrivateMethods) finishAction] KSUpdateCheckAction found updates: {( )} 12/27/13 10:49:42.295 PM GoogleSoftwareUpdateDaemon[53663]: -[KSPrefetchAction performAction] KSPrefetchAction no updates to prefetch. 12/27/13 10:49:42.295 PM GoogleSoftwareUpdateDaemon[53663]: -[KSMultiUpdateAction performAction] KSSilentUpdateAction had no updates to apply. 12/27/13 10:49:42.296 PM GoogleSoftwareUpdateDaemon[53663]: -[KSMultiUpdateAction performAction] KSPromptAction had no updates to apply. 12/27/13 10:49:42.299 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateEngine(PrivateMethods) updateFinish] KSUpdateEngine update processing complete. 12/27/13 10:49:42.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect to the server /Volumes/Media Archive-1 12/27/13 10:49:42.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect on /Volumes/Media Archive-1 failed 65. 
12/27/13 10:49:42.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: sleep for 8 seconds and then try again 12/27/13 10:49:43.152 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateEngine updateAllProducts] KSUpdateEngine updating all installed products. 12/27/13 10:49:43.153 PM GoogleSoftwareUpdateDaemon[53663]: -[KSCheckAction performAction] KSCheckAction checking 2 ticket(s). 12/27/13 10:49:43.158 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateCheckAction performAction] KSUpdateCheckAction starting update check for ticket(s): {( <KSTicket:0x18367a0 productID=com.google.Keystone version=1.1.0.3659 xc=<KSPathExistenceChecker:0x1837e10 path=/Library/Google/GoogleSoftwareUpdate/GoogleSoftwareUpdate.bundle/> serverType=Omaha url=htt[PeeEs]://tools.google.com/service/update2 creationDate=2012-08-12 14:47:10 >, <KSTicket:0x1834750 productID=com.google.talkplugin version=4.7.0.15362 xc=<KSPathExistenceChecker:0x1833890 path=/Library/Application Support/Google/GoogleTalkPlugin.app> serverType=Omaha url=htt[PeeEs]://tools.google.com/service/update2 creationDate=2012-08-12 14:47:10 > )} Using server: <KSOmahaServer:0x52e930 engine=<KSDaemonUpdateEngine:0x52e530> params={ EngineVersion = "1.1.0.3659"; ActivesInfo = { "com.google.talkplugin" = { LastRollCallPingDate = 2013-10-06 07:00:00 +0000; }; "com.google.Keystone" = { LastRollCallPingDate = 2013-10-06 07:00:00 +0000; LastActivePingDate = 2013-10-06 07:00:00 +0000; LastActiveDate = 2013-12-28 03:49:42 +0000; }; "com.google.picasa" = { LastActiveDate = 2012-08-29 10:15:42 +0000; }; }; UserInitiated = 0; IsSystem = 1; OmahaOSVersion = "10.8.5_i486"; Identity = KeystoneDaemon; AllowedSubdomains = ( ".omaha.sandbox.google.com", ".tools.google.com", ".www.google.com", ".corp.google.com" ); } > 12/27/13 10:49:43.159 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateCheckAction performAction] KSUpdateCheckAction running KSServerUpdateRequest: <KSOmahaServerUpdateRequest:0x53a210 server=<KSOmahaServer:0x52e930> url="htt[PeeEs]://tools.google.com/service/update2" runningFetchers=0 tickets=2 activeTickets=1 rollCallTickets=2 body= <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <o:gupdate xmlns:o="htt[Pee]://www.google.com/update2/request" protocol="2.0" version="KeystoneDaemon-1.1.0.3659" ismachine="1"> <o:os platform="mac" version="MacOSX" sp="10.8.5_i486"></o:os> <o:app appid="com.google.Keystone" version="1.1.0.3659" lang="en-us" installage="502" brand="GGLG"> <o:ping r="82" a="82"></o:ping> <o:updatecheck></o:updatecheck> </o:app> <o:app appid="com.google.talkplugin" version="4.7.0.15362" lang="en-us" installage="502" brand="GGLG"> <o:ping r="82"></o:ping> <o:updatecheck></o:updatecheck> </o:app> </o:gupdate> > 12/27/13 10:49:43.243 PM GoogleSoftwareUpdateDaemon[53663]: -[KSOutOfProcessFetcher(PrivateMethods) helperDidTerminate:] The Internet connection appears to be offline. [NSURLErrorDomain:-1009] 12/27/13 10:49:43.243 PM GoogleSoftwareUpdateDaemon[53663]: -[KSServerUpdateRequest(PrivateMethods) fetcher:failedWithError:] KSServerUpdateRequest fetch failed. (productIDs: com.google.Keystone, ... (2)) [com.google.UpdateEngine.CoreErrorDomain:702 - 'htt[PeeEs]://tools.google.com/service/update2'] (The Internet connection appears to be offline. [NSURLErrorDomain:-1009]) 12/27/13 10:49:43.244 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateCheckAction(PrivateMethods) finishAction] KSUpdateCheckAction found updates: {( )} 12/27/13 10:49:43.247 PM GoogleSoftwareUpdateDaemon[53663]: -[KSPrefetchAction performAction] KSPrefetchAction no updates to prefetch. 
12/27/13 10:49:43.248 PM GoogleSoftwareUpdateDaemon[53663]: -[KSMultiUpdateAction performAction] KSSilentUpdateAction had no updates to apply. 12/27/13 10:49:43.248 PM GoogleSoftwareUpdateDaemon[53663]: -[KSMultiUpdateAction performAction] KSPromptAction had no updates to apply. 12/27/13 10:49:43.250 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateEngine(PrivateMethods) updateFinish] KSUpdateEngine update processing complete. 12/27/13 10:49:45.570 PM GoogleSoftwareUpdateDaemon[53663]: -[KeystoneDaemon logServiceState] GoogleSoftwareUpdate daemon (1.1.0.3659) vending: com.google.Keystone.Daemon.UpdateEngine: 1 connection(s) com.google.Keystone.Daemon.Administration: 0 connection(s) 12/27/13 10:49:50.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect to the server /Volumes/Media Archive-1 12/27/13 10:49:50.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect on /Volumes/Media Archive-1 failed 65. 12/27/13 10:49:50.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: sleep for 10 seconds and then try again 12/27/13 10:49:53.828 PM KernelEventAgent[57]: tid 00000000 unmounting 1 filesystems 12/27/13 10:49:53.000 PM kernel[0]: AFP_VFS afpfs_unmount: /Volumes/Media Archive-1, flags 524288, pid 57 12/27/13 10:49:54.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: get the reconnect token 12/27/13 10:49:54.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: GetReconnectToken failed 32 /Volumes/Media Archive-1 12/27/13 10:49:54.000 PM kernel[0]: AFP_VFS afpfs_unmount : afpfs_DoReconnect sent signal for unmount to proceed 12/27/13 10:50:12.104 PM GoogleSoftwareUpdateDaemon[53663]: -[KeystoneDaemon main] GoogleSoftwareUpdateDaemon inactive, shutdown. 12/27/13 10:50:29.396 PM Dock[93157]: no information back from LS about running process
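
    A minimal sketch of how I could try to track down the culprit (the share name and paths are taken from the log above; the launchd directories are just the usual suspects, not a confirmed cause):
      # anything scheduled through launchd that references the share by name
      grep -ril "Media Archive" ~/Library/LaunchAgents /Library/LaunchAgents /Library/LaunchDaemons 2>/dev/null
      # while the volume is mounted, see which process is holding it open
      lsof +D "/Volumes/Media Archive-1" 2>/dev/null
      # list non-Apple jobs still loaded for the current user
      launchctl list | grep -iv com.apple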

    Read the article

  • AWS VPC ELB vs. Custom Load Balancing

    - by CP510
    So I'm wondering if this is a good idea. I have an Amazon AWS VPC set up with public and private subnets, so I already have the Internet Gateway and NAT. I was going to put all my web servers (Apache2 instances) and DB servers in the private subnet, and use a load balancer/reverse proxy to pick up requests and send them into the private subnet's cluster of servers. My question, then: are Amazon's ELBs a good fit for this, or is it better to set up my own custom instance to handle the public requests and pass them through the NAT using nginx or pound? I like the second option for the sake of having an instance I can log into and check, as well as for caching, fail2ban DDoS prevention, and possibly using failsafes to redirect traffic. But I have no experience with ELBs, so I thought I'd ask your opinions. Also, would the second option allow me to keep just one public IP address and route SSH connections to the respective instances by port number? Thanks in advance!
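
    On the SSH-through-one-IP part, a rough sketch of what the custom front-end instance could do with iptables (the private addresses and ports below are placeholders, not values from my setup):
      # let the gateway instance forward packets
      echo 1 > /proc/sys/net/ipv4/ip_forward
      # map distinct public ports to SSH on the private web servers
      iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2201 -j DNAT --to-destination 10.0.1.11:22
      iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2202 -j DNAT --to-destination 10.0.1.12:22
      # rewrite the source so replies return through the gateway
      iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE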

    Read the article

  • Why is 32-bit-mode required in IIS7.5 for my app?

    - by Jonas Lincoln
    I have a .net4 web application running in a 64 bits 2008 server. I can only get it to run when I set the app pool to Enable 32-bits application to true. All dlls are compiled for .net4 (verified with corflags.exe). How can I figure out why Enable 32-bit is required? The error message from the event log when starting as a 64-bit app-pool Event code: 3008 Event message: A configuration error has occurred. Event time: 2011-03-16 08:55:46 Event time (UTC): 2011-03-16 07:55:46 Event ID: 3c209480ff1c4495bede2e26924be46a Event sequence: 1 Event occurrence: 1 Event detail code: 0 Application information: Application domain: removed Trust level: Full Application Virtual Path: removed Application Path: removed Machine name: NMLABB-EXT01 Process information: Process ID: 4324 Process name: w3wp.exe Account name: removed Exception information: Exception type: ConfigurationErrorsException Exception message: Could not load file or assembly 'System.Data' or one of its dependencies. An attempt was made to load a program with an incorrect format. at System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) at System.Web.Configuration.CompilationSection.LoadAllAssembliesFromAppDomainBinDirectory() at System.Web.Configuration.AssemblyInfo.get_AssemblyInternal() at System.Web.Compilation.BuildManager.GetReferencedAssemblies(CompilationSection compConfig) at System.Web.Compilation.BuildManager.CallPreStartInitMethods() at System.Web.Hosting.HostingEnvironment.Initialize(ApplicationManager appManager, IApplicationHost appHost, IConfigMapPathFactory configMapPathFactory, HostingEnvironmentParameters hostingParameters, PolicyLevel policyLevel, Exception appDomainCreationException) Could not load file or assembly 'System.Data' or one of its dependencies. An attempt was made to load a program with an incorrect format. 
at System.Reflection.RuntimeAssembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks) at System.Reflection.RuntimeAssembly.InternalLoadAssemblyName(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection, Boolean suppressSecurityChecks) at System.Reflection.RuntimeAssembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) at System.Reflection.Assembly.Load(String assemblyString) at System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) Request information: Request URL: "our url" Request path: "url" User host address: ip-adddress User: Is authenticated: False Authentication Type: Thread account name: "app-pool" Thread information: Thread ID: 6 Thread account name: "app-pool" Is impersonating: False Stack trace: at System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) at System.Web.Configuration.CompilationSection.LoadAllAssembliesFromAppDomainBinDirectory() at System.Web.Configuration.AssemblyInfo.get_AssemblyInternal() at System.Web.Compilation.BuildManager.GetReferencedAssemblies(CompilationSection compConfig) at System.Web.Compilation.BuildManager.CallPreStartInitMethods() at System.Web.Hosting.HostingEnvironment.Initialize(ApplicationManager appManager, IApplicationHost appHost, IConfigMapPathFactory configMapPathFactory, HostingEnvironmentParameters hostingParameters, PolicyLevel policyLevel, Exception appDomainCreationException) Custom event details:

    Read the article

  • Juniper’s Network Connect ncsvc on Linux: “host checker failed, error 10”

    - by hfs
    I’m trying to log in to a Juniper VPN with Network Connect from a headless Linux client. I followed the instructions and used the script from http://mad-scientist.us/juniper.html. When running the script with --nogui switch the command that gets finally executed is $HOME/.juniper_networks/network_connect/ncsvc -h HOST -u USER -r REALM -f $HOME/.vpn.default.crt. I get asked for the password, a line “Connecting to…” is printed but then the programm silently stops. When adding -L 5 (most verbose logging) to the command line, these are the last messages printed to the log: dsclient.info state: kStateCacheCleaner (dsclient.cpp:280) dsclient.info --> POST /dana-na/cc/ccupdate.cgi (authenticate.cpp:162) http_connection.para Entering state_start_connection (http_connection.cpp:282) http_connection.para Entering state_continue_connection (http_connection.cpp:299) http_connection.para Entering state_ssl_connect (http_connection.cpp:468) dsssl.para SSL connect ssl=0x833e568/sd=4 connection using cipher RC4-MD5 (DSSSLSock.cpp:656) http_connection.para Returning DSHTTP_COMPLETE from state_ssl_connect (http_connection.cpp:476) DSHttp.debug state_reading_response_body - copying 0 buffered bytes (http_requester.cpp:800) DSHttp.debug state_reading_response_body - recv'd 0 bytes data (http_requester.cpp:833) dsclient.info <-- 200 (authenticate.cpp:194) dsclient.error state host checker failed, error 10 (dsclient.cpp:282) ncapp.error Failed to authenticate with IVE. Error 10 (ncsvc.cpp:197) dsncuiapi.para DsNcUiApi::~DsNcUiApi (dsncuiapi.cpp:72) What does host checker failed mean? How can I find out what it tried to check and what failed? The HostChecker Configuration Guide mentions that a $HOME/.juniper_networks/tncc.jar gets installed on Linux, but my installation contains no such file. From that I concluded that HostChecker is disabled for my VPN on Linux? Are the POST to /dana-na/cc/ccupdate.cgi and “host checker failed” connected or independent? By running the connection over a SSL proxy I found out that the POST data is status=NOTOK (Funny side note: the client of the oh-so-secure VPN does not validate the server’s SSL certificate, so is wide open to MITM attacks…). So it seems that it’s the client that closes the connection and not the server.
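
    A couple of quick checks I can run, assuming the install paths used by the mad-scientist script (HOST, USER and REALM are placeholders as above):
      # the Host Checker component the guide refers to; reportedly absent on this machine
      ls -l ~/.juniper_networks/tncc.jar
      # the host checker is a Java component, so a missing JRE would also make it fail
      java -version
      # re-run the client with maximum logging and keep the output for comparison
      ~/.juniper_networks/network_connect/ncsvc -h HOST -u USER -r REALM -f ~/.vpn.default.crt -L 5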

    Read the article

  • linux keeps disconnecting from wireless network

    - by Matteo Ceccarello
    I'm running Arch Linux on an Acer laptop and my wireless connection doesn't stay up. After a while it disconnects, and when I try to reconnect I get stuck with a "Waiting for authorization" message. I have to retry several times before the connection stays up for a few minutes. This happens with both networkmanager and wicd. The strange thing is that the iMac sitting next to the laptop connects fine, and when I use my laptop on the university wireless network it works normally. How can I solve this problem? EDIT: I've tried to connect manually following these steps: iwlist wlan0 scan; wpa_supplicant -i wlan0 -c /etc/wpa_supplicant.conf; dhcpcd wlan0. It works, and I can ping Google. However, looking at the wpa_supplicant output I see that it keeps connecting and disconnecting. I'm using WPA2, and this seems to be a problem with authentication. EDIT 2: as pointed out in the answers, I forgot to mention my hardware/software specifications: kernel Linux 3.0-ARCH; wireless card (lspci | grep -i net): 07:00.0 Network controller: Intel Corporation WiFi Link 5100; module used (lsmod | grep -i 80211): mac80211 216021 1 iwlagn. I use a Netgear DGN1000 modem/router. My dmesg output is here: http://pastebin.com/8Tf7iage
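
    A sketch for isolating the authentication step, assuming wlan0 and a WPA2-PSK network (the SSID and passphrase are placeholders):
      # generate a minimal wpa_supplicant config from the passphrase
      wpa_passphrase "MySSID" "my-passphrase" > /etc/wpa_supplicant.conf
      # run wpa_supplicant in the foreground with debug output to watch the 4-way handshake repeat
      wpa_supplicant -d -i wlan0 -c /etc/wpa_supplicant.conf
      # afterwards, look for deauthentication reasons reported by the driver
      dmesg | grep -iE "deauth|iwlagn|wlan0" | tail -n 40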

    Read the article

  • Set proper rights for sshfs mountpoint so it can be shared with samba

    - by CS01
    I have a domain hoster that provides access via SSH. My platforms are: Gentoo 2.6.36-r5 Windows (XP/Vista/7) I work on my Windows, I use Gentoo to do all the magic Windows can't do. Therefore I use sshfs to mount the remote public directory for my domain to /mnt/mydomain.com. Authentication is done via keys, so lazy me don't have to type in my password every now and then. Since I do my coding on Windows, and I don't want to upload/download the changed files all the time, I want to access this /mnt/mydomain.com via a samba share. So I shared /mnt in samba, all mounts except mydomain.com is listed on my Windows Explorer. My theories are: sshfs does not set the mountpoint uid/gid to something that samba expects samba does not know that it has to include the uid/gid that /mnt/mydomain.com has been set. All above is wrong, and I don't know. Here are configs and output from console, need anything else just let me know. Also no errors or warnings that I take notice of being relevant to this issue, but I might be wrong. gentoo ~ # ls -lah /mnt total 20K drwxr-xr-x 9 root root 4.0K Mar 26 16:15 . drwxr-xr-x 18 root root 4.0K Mar 26 2011 .. -rw-r--r-- 1 root root 0 Feb 1 16:12 .keep drwxr-xr-x 1 root root 0 Mar 18 12:09 buffer drwxr-s--x 1 68591 68591 4.0K Feb 16 15:43 mydomain.com drwx------ 2 root root 4.0K Feb 1 16:12 cdrom drwx------ 2 root root 4.0K Feb 1 16:12 floppy drwxr-xr-x 1 root root 0 Sep 1 2009 services drwxr-xr-x 1 root root 0 Feb 10 15:08 www /etc/samba/smb.conf [mnt] comment = Mount points writable = yes writeable = yes browseable = yes browsable = yes path = /mnt /etc/fstab sshfs#[email protected]:/home/to/pub/dir/ /mnt/mydomain.com/ fuse comment=sshfs,noauto,users,exec,uid=0,gid=0,allow_other,reconnect,follow_symlinks,transform_symlinks,idmap=none,SSHOPT=HostBasedAuthentication 0 0 For an easier read: [email protected] /home/to/pub/dir/ /mnt/mydomain.com/ options: comment=sshfs noauto users exec uid=0 gid=0 allow_other reconnect follow_symlinks transform_symlinks idmap=none SSHOPT=HostBasedAuthentication Help!
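
    One thing worth sketching out (not a confirmed fix): remount so the files carry a local uid/gid that Samba can map, instead of the remote owner 68591 shown above. USER@HOST stands in for the hoster login, and uid/gid 1000 is an assumption for the local account the Windows user maps to.
      # unmount the existing fuse mount first
      fusermount -u /mnt/mydomain.com
      # remount, presenting every remote file as local uid/gid 1000 regardless of the server-side owner
      sshfs -o allow_other,reconnect,follow_symlinks,uid=1000,gid=1000 USER@HOST:/home/to/pub/dir/ /mnt/mydomain.com/
      # verify the numeric ownership Samba will now see
      ls -lahn /mnt/mydomain.com | head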

    Read the article

  • .NET 2.0 Application now running slow on IIS 7.5

    - by Valien
    I recently moved (and am still testing) an application from a Windows 2003 Server (physical box) running IIS 6.x to a Windows 2008 R2 Standard (VM) IIS 7.5 server. The application is a .NET Framework 2.0 application and is running under a 2.0 app pool. The site works great except for one thing: it takes forever to get a request back. I've been tracking it with Chrome's Inspect Element; a query to the site can take up to 45 seconds to answer, but once it does, the page(s) render instantly, so it's that initial request that's killing it. I see no errors in the application logs, Windows Event Viewer, or even the IIS logs, so I'm not sure where to look next. One change is that the app previously sat behind a Pix firewall and is now in a DMZ inside a larger network environment (I believe NetScaler is also being used to manage the network). I don't have rights to look at the network itself; I can ask the data center folks to dig deeper, but I wanted to make sure it isn't my application or IIS causing the slowdown. In summary: the .NET 2.0 application works great in IIS 6.x; moved to an IIS 7.5 server, it is now slow to respond, but when it does respond the pages come back instantly. Edit for solution: it turned out to be SOAP calls slowing the site down. In the new data center my application cannot reach the SOAP endpoints, so the calls time out after 40-45 seconds or so. Now I'm trying to find out if I can install a proxy server to redirect them...

    Read the article

  • Internet explorer rejects cookies in kerberos protected intranet sites

    - by remix_tj
    I'm trying to build an intranet site using joomla. The webserver is using HTTP Kerberos authentication with mod_kerb_auth. Everything works fine, the users get authenticated and so on. But if i try to login to the administrator panel i can't because IE does not accept the needed cookies. No such problem with firefox. The intranet site is called "intranet_new" and is hosted by webintranet04, under the directory /var/www/vhosts/joomla/intranet_new/. I have my virtualhost for intranet_new containing this: <Location /> AuthType Kerberos AuthName "Kerberos Login" KrbMethodNegotiate On KrbMethodK5Passwd On KrbAuthRealms PROV.TV.LOCAL Krb5KeyTab /etc/apache2/HTTP.keytab require valid-user </Location> The same is for webintranet04 virtualhost, which is the default pointing to /var/www and contains: <Location /vhosts/joomla/> AuthType Kerberos AuthName "Kerberos Login" KrbMethodNegotiate On KrbMethodK5Passwd On KrbAuthRealms PROV.TV.LOCAL Krb5KeyTab /etc/apache2/HTTP.keytab require valid-user </Location> the very strange problem i have is that if i open http:// webintranet04/vhosts/joomla/intranet_new/administrator IE allows me to login, accepting cookie. If i open http:// intranet_new/administrator, instead, i loop on the login page. Last, intranet_new is a CNAME record of webintranet04. This is only an IE problem. I need: - the admin interface to work with IE - the "kerberized" zone to accept cookie, because i am deploying other programs requiring cookies.

    Read the article

  • How to send email from home ip when the email server isn't a designated outbound mail server allocated to BT Retail customers [on hold]

    - by Mr Shoubs
    (I am sys admin!) I can receive email, but when I try to send an email from my home office via our work email server I get the following reply: Your message did not reach some or all of the intended recipients. Subject: Test Sent: 19/08/2014 17:02 The following recipient(s) cannot be reached: 'Joe Blogs' on 19/08/2014 17:02 Server error: '554 5.7.1 Service unavailable; Client host [my-ip-here] blocked using zen.spamhaus.org; http://www.spamhaus.org/query/bl?ip=my-ip-here' I went to that URL and it says the following: Ref: PBL231588 81.152.0.0/13 is listed on the Policy Block List (PBL) Outbound Email Policy of BT Retail for this IP range: It is the policy of BT Retail that unauthenticated email sent from this IP address should be sent out only via the designated outbound mail server allocated to BT Retail customers. Please consult the following URL for details on how to configure your email client appropriately. http://btybb.custhelp.com/cgi-bin/btybb.cfg/php/enduser/cci/bty_adp.php?p_sid=fPnV4zhj&p_faqid=6876 Removal Procedure Removal of IP addresses within this range from the PBL is not allowed by the netblock owner's policy. Going to this URL just says: This site has been disabled for the time being. Does anyone know what I should do to allow me to send emails from my home ip - the site suggests I can configure my email client? (note that I have configured the client to use smtp authentication)
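
    A hedged check (mail.example.org stands in for the work server): see whether it accepts authenticated submission on port 587, since authenticated submission is normally exempted from the DNSBL checks that apply to unauthenticated port-25 connections from dynamic ranges.
      # does the server listen for submission and advertise AUTH after STARTTLS?
      openssl s_client -connect mail.example.org:587 -starttls smtp -crlf -quiet
      # at the prompt, type "EHLO test" and look for a "250-AUTH ..." line
      # before pointing the mail client at port 587 with SMTP authentication enabled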

    Read the article

  • mysqld refusing connections from localhost

    - by Dennis Rardin
    My mail server (Ubuntu 10.04) uses mysql for virtual domains, virtual users. For some reason, mysqld has started refusing connections from localhost. I see these in the mail server log: Oct 6 00:31:14 apollo postfix/trivial-rewrite[16888]: fatal: proxy:mysql:/etc/postfix/mysql-virtual_domains.cf(0,lock|fold_fix): table lookup problem and: Oct 7 13:39:15 apollo postfix/proxymap[25839]: warning: connect to mysql server 127.0.0.1: Lost connection to MySQL server at 'reading initial communication packet', system error: 0 I also get the following in auth.log: Oct 6 22:33:31 apollo mysqld[31775]: refused connect from 127.0.0.1 Telnet to the local port: root@apollo:/var/log/mysql# telnet localhost 3306 Trying ::1... Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. Connection closed by foreign host. root@apollo:/var/log/mysql# I am not sure why this started happening, but there was a disk failure in a RAID 1 pair a bit earlier that day. So it's possible I have a damaged config file or something. But mail was working for at least an hour after the drive event, so who knows for sure? phpmyadmin works fine, and the databases themselves look like they're intact. I think/believe that selinux and iptables are disabled and not running. So ... why is mysqld refusing connections from localhost? What should I check? What processes might cause this if a .conf file or possibly a binary was damaged? Which other log files might contain clues? I've enabled "general logging" in /etc/mysql/my.cnf, but I get no interesting or informative entries there. Thanks, m00tpoint
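
    A hedged guess worth ruling out first: on Debian/Ubuntu, mysqld is built with TCP-wrappers support, and "refused connect from" in auth.log is the classic libwrap refusal message, so a stray hosts.deny entry (perhaps introduced around the time of the disk trouble) could produce exactly these symptoms. A quick sketch:
      # any wrapper rules mentioning mysqld?
      grep -i mysqld /etc/hosts.allow /etc/hosts.deny
      # confirm the daemon is really listening on the loopback interface
      netstat -ltnp | grep 3306
      # compare with a socket connection, which bypasses TCP (and the wrappers) entirely
      mysql --protocol=SOCKET -u root -p -e "SELECT 1;"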

    Read the article

  • Periodic internet connection drops

    - by sterlingholt
    My setup is a DSL modem and a D-Link DI-524M router. I'm also using a Witopia VPN which runs through OpenVPN. I've been having trouble with the internet connection dropping very frequently. It comes back shortly, without even a router/modem/computer restart. This happens as often as every ten minutes; occasionally (not often) it will last an hour or two without dropping. When it drops, I can get it back almost immediately by clicking Reconnect in the OpenVPN GUI and letting that do its thing. It's worth noting that I'm in China, so calling support is a bit difficult. Also, I don't really understand all of the router's software, although I've got it generally figured out. I've tried a bunch of things to diagnose and/or fix the problem, with no success: power cycling both the modem and the router, using an ethernet connection to the router, connecting without the VPN, disabling IEEE authentication on all connections, checking for viruses, and lifting the router off the ground to prevent overheating.

    Read the article

  • chrooted sftp user with write permissions to /var/www

    - by matthew
    I am getting confused about this setup that I am trying to deploy. I hope someone of you folks can lend me a hand: much much appreciated. Background info Server is Debian 6.0, ext3, with Apache2/SSL and Nginx at the front as reverse proxy. I need to provide sftp access to the Apache root directory (/var/www), making sure that the sftp user is chrooted to that path with RWX permissions. All this without modifying any default permission in /var/www. drwxr-xr-x 9 root root 4096 Nov 4 22:46 www Inside /var/www -rw-r----- 1 www-data www-data 177 Mar 11 2012 file1 drwxr-x--- 6 www-data www-data 4096 Sep 10 2012 dir1 drwxr-xr-x 7 www-data www-data 4096 Sep 28 2012 dir2 -rw------- 1 root root 19 Apr 6 2012 file2 -rw------- 1 root root 3548528 Sep 28 2012 file3 drwxr-x--- 6 www-data www-data 4096 Aug 22 00:11 dir3 drwxr-x--- 5 www-data www-data 4096 Jul 15 2012 dir4 drwxr-x--- 2 www-data www-data 536576 Nov 24 2012 dir5 drwxr-x--- 2 www-data www-data 4096 Nov 5 00:00 dir6 drwxr-x--- 2 www-data www-data 4096 Nov 4 13:24 dir7 What I have tried created a new group secureftp created a new sftp user, joined to secureftp and www-data groups also with nologin shell. Homedir is / edited sshd_config with Subsystem sftp internal-sftp AllowTcpForwarding no Match Group <secureftp> ChrootDirectory /var/www ForceCommand internal-sftp I can login with the sftp user, list files but no write action is allowed. Sftp user is in the www-data group but permissions in /var/www are read/read+x for the group bit so... It doesn't work. I've also tried with ACL, but as I apply ACL RWX permissions for the sftp user to /var/www (dirs and files recursively), it will change the unix permissions as well which is what I don't want. What can I do here? I was thinking I could enable the user www-data to login as sftp, so that it'll be able to modify files/dirs that www-data owns in /var/www. But for some reason I think this would be a stupid move securitywise.
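
    A sketch of the ACL route, in case it was only the group-class mask (the "+" and changed group column in ls) that made it look as though the unix permissions were being rewritten; the sftp account name sftpuser is a placeholder:
      # grant the sftp account access to what already exists under the web root
      setfacl -R -m u:sftpuser:rwX /var/www
      # make files and directories created later by www-data carry the same entry
      setfacl -R -d -m u:sftpuser:rwX /var/www
      # owner and other bits stay as they were; only the ACL mask/group class changes
      getfacl /var/www | head -n 12
      ls -ld /var/www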

    Read the article

  • Nginx won't send POST to fastcgi backend, but GET works fine?

    - by xyld
    Not sure why, but it is happy sending a GET to the fastcgi backend (Mercurial hgwebdir in this case), but simply resorts to the filesystem if the request is a POST. Relevant parts of nginx.conf: location / { root /var/www/htdocs/; index index.html; autoindex on; } location /hg { fastcgi_pass unix:/var/run/hg-fastcgi.socket; include fastcgi_params; if ($request_uri ~ ^/hg([^?#]*)) { set $rewritten_uri $1; } limit_except GET { allow all; deny all; auth_basic "hg secured repos"; auth_basic_user_file /var/trac.htpasswd; } fastcgi_param SCRIPT_NAME "/hg"; fastcgi_param PATH_INFO $rewritten_uri; # for authentication fastcgi_param AUTH_USER $remote_user; fastcgi_param REMOTE_USER $remote_user; #fastcgi_pass_header Authorization; #fastcgi_intercept_errors on; } GET's work fine, but POST delivers this error to the error_log: 2010/05/17 14:12:27 [error] 18736#0: *1601 open() "/usr/html/hg/test" failed (2: No such file or directory), client: XX.XX.XX.XX, server: domain.com, request: "POST /hg/test HTTP/1.1", host: "domain.com" What could possibly be the issue? I'm trying to allow read-only access via GET's to the page, but require authorization when using hg push to the same url which sends a POST request.
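
    To reproduce it outside a browser and confirm it is the request method (rather than the auth step) that makes nginx fall back to the filesystem, a quick curl sketch (user:pass is a placeholder):
      # GET: expected to be proxied through to hgwebdir
      curl -i http://domain.com/hg/test
      # POST to the same URI, authenticated, with an empty body: reportedly ends up at /usr/html
      curl -i -u user:pass -X POST --data-binary @/dev/null http://domain.com/hg/test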

    Read the article

  • Dovecot authentication not working

    - by user1488723
    I run a Ubuntu 10.04 VPS with Postfix and Dovecot installed. For a while I had problems with the mailserver itself (Postfix) but now it runs ok. I can telnet into it from localhost (telnet localhost 25 while logged in) and Im blocked if I try to do it from the outside (telnet mail.example.org 25). This is as it should be according to my main.cf However when I try to log in using Dovecot (openssl s_client -connect mail.example.com:993) I'm allowed in but denied when trying to identify myself as a user: Excerpt from Dovecot log in: Key-Arg : None Start Time: 1341074622 Timeout : 300 (sec) Verify return code: 18 (self signed certificate) OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE AUTH=PLAIN AUTH=LOGIN] Dovecot ready. When I continue and try to log in to a specific user with the command: A001 login user password I get: A001 NO [AUTHENTICATIONFAILED] Authentication failed. I've reset the password to ensure it is correct and I know the user (user) exists on the system. When I do /etc/init.d/dovecot reload I get: /etc/init.d/dovecot: 29: maildir:~/Maildir: not found * Reloading IMAP/POP3 mail server dovecot [ OK ] Could it be that the mailboxes isn't found? Postfix main.cf: home_mailbox = Maildir/ mailbox_command = recipient_delimiter = + inet_interfaces = all smtpd_use_tls = yes smtpd_tls_auth_only = no smtpd_tls_loglevel = 1 smtpd_tls_cert_file = /etc/postfix/ssl/smtpd.crt smtpd_tls_key_file = /etc/postfix/ssl/smtpd.key smtpd_tls_CAfile = /etc/postfix/ssl/cacert.pem smtpd_sasl_auth_enable = yes smtpd_client_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unauth_destination smtpd_sender_restrictions = permit_sasl_authenticated, permit_mynetworks smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination broken_sasl_auth_clients = yes smtpd_sasl_type = dovecot smtpd_sasl_path = private/auth smtpd_sasl_security_options = noanonymous smtpd_sasl_local_domain = $mydomain Dovecot.conf: protocols = imap imaps disable_plaintext_auth = no log_timestamp = "%b %d %H:%M:%S " ssl = yes ssl_cert_file = /etc/postfix/ssl/smtpd.crt ssl_key_file = /etc/postfix/ssl/smtpd.key mail_location = maildir:~/Maildir auth_verbose = yes mail_access_groups = mail auth_username_chars = abcdefghijklmnopqrstuvwxyz0123456789 protocol imap { imap_client_workarounds = delay-newmail tb-extra-mailbox-sep } auth default { mechanisms = plain login passdb pam { } userdb passwd { } socket listen { client { path = /var/spool/postfix/private/auth user = postfix group = postfix mode = 0660 } } }
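
    A hedged next step: switch on Dovecot's auth debugging and watch the mail and auth logs while repeating the IMAP login, so a PAM failure (if that is what this is) shows up explicitly. The log paths are the usual Ubuntu defaults and may differ here.
      # add to dovecot.conf, then restart (auth_debug_passwords logs secrets, so remove it again afterwards):
      #   auth_debug = yes
      #   auth_debug_passwords = yes
      /etc/init.d/dovecot restart
      # retry "A001 login user password" over openssl s_client while watching:
      tail -f /var/log/mail.log /var/log/auth.log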

    Read the article

  • can't ssh from mac to windows (running ssh server on cygwin)

    - by Denise
    I set up an ssh server on a fresh windows 7 machine using the latest version of cygwin. Disabled the firewall. I can ssh into it from itself, from a different windows box (using winssh), and from a linux vm. In spite of that, I tried to ssh in from two different macs, and neither would let me! This is the debug output: OpenSSH_5.1p1, OpenSSL 0.9.7l 28 Sep 2006 debug1: Reading configuration data /etc/ssh_config debug1: Connecting to 3dbuild [172.18.4.219] port 22. debug1: Connection established. debug1: identity file /Users/Denise/.ssh/identity type -1 debug1: identity file /Users/Denise/.ssh/id_rsa type 1 debug1: identity file /Users/Denise/.ssh/id_dsa type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.5 debug1: match: OpenSSH_5.5 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.1 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-cbc hmac-md5 none debug1: kex: client->server aes128-cbc hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host '3dbuild' is known and matches the RSA host key. debug1: Found key in /Users/Denise/.ssh/known_hosts:43 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password,keyboard-interactive debug1: Next authentication method: publickey debug1: Trying private key: /Users/Denise/.ssh/identity debug1: Offering public key: /Users/Denise/.ssh/id_rsa Connection closed by [ip] It shows the same output, and fails at the same place, whether I have put my public key on the ssh server or not. Any help would be appreciated-- hopefully someone has run into this before?
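
    A sketch for getting the server's side of the story: run a one-off sshd in debug mode on the Windows/Cygwin box and connect to it from the Mac. Port 2222 is an arbitrary choice, and the Cygwin service name may be sshd or cygsshd depending on how ssh-host-config registered it.
      # on the cygwin side: stop the service and start a foreground, verbose instance on a spare port
      net stop sshd
      /usr/sbin/sshd -D -ddd -p 2222
      # from the mac, with full client-side verbosity:
      #   ssh -vvv -p 2222 user@3dbuild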

    Read the article

  • ADSL Modem/Router sometimes hands out incorrect IP addresses

    - by Peter Keevill
    My setup is as follows: the main ADSL modem/router (switch) is configured as a DHCP server with the address range 192.168.0.25-60. The office machines are configured with fixed IPs (not in the same address pool, of course) and hard-wired to this router. A wireless access point (router) is connected to provide Internet access for guests in a separate area; it is NOT configured as a DHCP server, and wireless authentication is turned off. IP address lease times are set to 4 hours. Sometimes guests are able to connect to the wireless access point but are not given a valid IP; they get 169.x.x.x addresses. Rebooting their machines does not resolve the problem. The only fix is to reboot the main ADSL router, which is often frustrating for the other users who are successfully connected with a valid IP and default gateway. The problem seems to occur more frequently with Apple/Mac guests, although it also sometimes happens with Windows machines. I personally use Ubuntu on my laptop and so far have never had any problem connecting and getting a valid IP address in the guest area. One further point that may give a clue: certain guests (always Apple/Mac) get lease times of 90 days. However, this does not 'stack out' the number of available addresses, and of course rebooting the router clears them until the next time they log in.

    Read the article

  • Spring-mvc project can't select from a particular mysql table

    - by Dan Ray
    I'm building a Spring-mvc project (using JPA and Hibernate for DB access) that is running just great locally, on my dev box, with a local MySQL database. Now I'm trying to put a snapshot up on a staging server for my client to play with, and I'm having trouble. Tomcat (after some wrestling) deploys my war file without complaint, and I can get some response from the application over the browser. When I hit my main page, which is behind Spring Security authentication, it redirects me to the login page, which works perfectly. I have Security configured to query the database for user details, and that works fine. In fact, a change to a password in the database is reflected in the behavior of the login form, so I'm confident it IS reaching the database and querying the user table. Once authenticated, we go to the first "real" page of the app, and I get a "data access failure" error. The server's console log gets this line (redacted): ERROR org.hibernate.util.JDBCExceptionReporter - SELECT command denied to user 'myDbUser'@'localhost' for table 'asset' However, if I go to MySQL from the shell using exactly the same creds, I have no problem at all selecting from the asset table: [development@tomcat01stg]$ mysql -u myDbUser -pmyDbPwd dbName ... mysql> \s -------------- mysql Ver 14.12 Distrib 5.0.77, for redhat-linux-gnu (i686) using readline 5.1 Connection id: 199 Current database: dbName Current user: myDbUser@localhost ... UNIX socket: /var/lib/mysql/mysql.sock -------------- mysql> select count(*) from asset; +----------+ | count(*) | +----------+ | 19 | +----------+ 1 row in set (0.00 sec) I've broken down my MySQL access settings, cleaned out the user and re-run the grant commands, set up a version of the user from 'localhost' and another from '%', making sure to flush permissions.... Nothing is changing the behavior of this thing. What gives?
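
    A sketch for comparing what the two paths are actually granted, since the JDBC connection may be matching a different user@host row (or hitting a different default database) than the interactive session; it assumes an account with enough privilege to read the grant tables:
      # which grant rows exist for this account, and for which host patterns?
      mysql -u root -p -e "SELECT user, host FROM mysql.user WHERE user = 'myDbUser';"
      mysql -u root -p -e "SHOW GRANTS FOR 'myDbUser'@'localhost';"
      # reproduce the application's path: force TCP instead of the unix socket the CLI uses by default
      mysql --protocol=TCP -h 127.0.0.1 -u myDbUser -pmyDbPwd dbName -e "SELECT COUNT(*) FROM asset;"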

    Read the article

  • SMTP message rate control on Ubuntu 8.04, preferably with postfix

    - by TimDaMan
    Maybe I am chasing a bug, but I am trying to set up an SMTP proxy of sorts. I have a postfix server which receives all the email for a collection of servers/clients. It then uses a smarthost (relayhost=...) to forward its mail to our corporate MTA. I would like to limit the number of messages an individual server can relay, to prevent swamping the corporate MTA. Postfix has a program called "anvil" that is capable of tracking stats about mail for such purposes, but it doesn't seem to be executed. I ran "inotifywait -m /usr/lib/postfix/anvil" while I started postfix and sent a number of messages through it from a remote server; inotifywait indicated anvil was never run. Has anyone gotten postfix/anvil rate controls to work? main.cf:
      smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
      biff = no
      append_dot_mydomain = no
      readme_directory = no
      myhostname = site-server-q9
      alias_maps = hash:/etc/aliases
      alias_database = hash:/etc/aliases
      myorigin = /etc/mailname
      mydestination = localhost
      relayhost = Out outgoing mail relay
      mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 10.0.0.0/8
      mailbox_size_limit = 0
      recipient_delimiter = +
      inet_interfaces = 10.X.X.X
      smtpd_client_message_rate_limit = 1
      anvil_rate_time_unit = 1h
    master.cf extract:
      anvil unix - - - - 1 anvil
      smtp inet n - - - - smtpd
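
    One hedged thing to rule out before assuming anvil is broken: the per-client limits exempt every client matched by smtpd_client_event_limit_exceptions, which defaults to $mynetworks, and the 10.0.0.0/8 entry above covers the very servers I want to throttle. A quick sketch:
      # what is currently exempt from anvil's counters?
      postconf smtpd_client_event_limit_exceptions mynetworks smtpd_client_message_rate_limit
      # narrow the exemption to loopback only, then reload and retest
      postconf -e "smtpd_client_event_limit_exceptions = 127.0.0.0/8"
      postfix reload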

    Read the article

  • debugging connection to mysql from python script using MySQLdb

    - by timpone
    I am a python newbie and have a python 2.5 script that is using MySQLdb to connect on OS X 10.5.8. I haven't been able to succesfully connect to the database of interest with this. However, I am able to connect using php's mysqli and also via the mysql cli interface. I get the error: File "build/bdist.macosx-10.5-i386/egg/MySQLdb/connections.py", line 188, in __init__ _mysql_exceptions.OperationalError: (1045, "Access denied for user 'arc_development'@'localhost' (using password: YES)") On my linux box which has the same mysql perms, the script works fine logging in. On my OS X laptop, I am able to create a database named test_python which bypasses mysql authentication scheme. This makes me think that issues like 32bit / 64bit incompatabilities aren't occuring. If I turn on the query log, I get access denied: 100610 20:56:55 4 Connect Access denied for user 'arc_development'@'localhost' (using password: YES) I'm a little bit at a loss to what to do next. Is there any way I can specify in the general log or binary log to get the actual password set on the connection string? How about writing out from connections.py file the value (although not sure how I'd do that)? thanks
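
    A sketch for checking which user@host row each path matches, since the CLI and php normally connect over the unix socket while the script, depending on the host parameter it passes, may be using TCP to 127.0.0.1:
      # reproduce a TCP connection to the loopback address
      mysql --protocol=TCP -h 127.0.0.1 -u arc_development -p -e "SELECT USER(), CURRENT_USER();"
      # reproduce the socket path the CLI normally takes
      mysql --protocol=SOCKET -u arc_development -p -e "SELECT USER(), CURRENT_USER();"
      # CURRENT_USER() reveals which account row actually authenticated in each case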

    Read the article

  • Moving my OpenID from Livejournal to... something else.

    - by T-Boy
    I've been an early user of OpenID, although there are still some questions about it that I've never had satisfactorily answered. Now, I understand that if I have full control over my domain, I can set it up to delegate the task of authenticating to another OpenID service provider. The problem is that what I'd like to do is get the Livejournal server to pass authentication to someone else, instead of having LJ do it. Preferably, I'd like Livejournal, when asked by an authenticating provider, to say, "No, I don't do it anymore -- go to this address". That address would then be in a domain I fully control, which would pass it on to whichever service provider I choose. I don't even know if I've got my understanding of OpenID right, whether all these shenanigans are necessary, whether my question makes sense, or whether it's even possible with a service provider like Livejournal. (I tried tagging this with livejournal, and it told me I couldn't because I don't have enough reputation. Oh well; one must start somewhere. Sorry for the inconvenience!)

    Read the article

  • outlook security alert after adding a second wireless access point to the network

    - by Mark
    We just added a Netgear WG103 Wireless Access Point in our conference room to allow visitors to access the internet through our internal network. When it is switched on, visitors can connect to the internet and everything works fine. However, when the Access Point is switched on, normal users of the network get a Security Alert when they try to start Outlook 2007. The Security Alert is the same as the one shown in question 148526 asked by desiny back in June 2010 (http://serverfault.com/questions/148526/outlook-security-alert-following-exchange-2007-upgrade-to-sp2), except that rather than "autodiscover.ad.unc.edu" my security alert references our "Remote.server.org.uk". If I view the certificate it relates to "Netgear HTTPS:....", but the only Netgear equipment we have is the new Access Point installed in the conference room. If the Access Point is not switched on we do not get the Security Alert. At first I thought it was because we had selected "WPA-PSK & WPA2-PSK" Network Authentication Type, but it continues to occur even if we opt for "Shared Key" WEP Data Encryption. I do not understand why adding a Netgear Wireless Access Point would cause Outlook to issue a Security Alert when users try to read their email. Does anyone know what I have to do to get rid of the Security Alert? Thanks in advance for reading this and helping me out.

    Read the article

  • getent passwd fails, getent group works?

    - by slugman
    I've almost got my AD integration working completely on my OpenSUSE 12.1 server. I have an OpenSUSE 11.4 system successfully integrated into our AD environment. (Meaning, we use LDAP to authenticate against the AD directory via Kerberos, so we can log in to our *nix systems as AD users, with the name service caching daemon caching our passwords and groups.) Also important to note: these systems are on our LAN, and SSL authentication is disabled. I am almost all the way there. nss_ldap is finally authenticating with the LDAP server (as /var/log/messages shows), but right now I have another problem: getent passwd and getent shadow fail (they show local accounts only), but getent group works! getent group shows all my AD groups! I copied over the relevant configuration files from my working OpenSUSE 11.4 box: /etc/krb5.conf, /etc/nsswitch.conf, /etc/nscd.conf, /etc/samba/smb.conf, /etc/sssd/sssd.conf, /etc/pam.d/common-session-pc, /etc/pam.d/common-account-pc, /etc/pam.d/common-auth-pc, /etc/pam.d/common-password-pc. I didn't modify anything between the two, and I don't think I need to, because getent passwd, getent shadow, and getent group all work fine on the OpenSUSE 11.4 box. Restarting the nscd service unfortunately didn't do much, and neither did running /usr/sbin/nscd -i passwd. Do any of you admin gurus have any suggestions? Honestly, I'm happy I made it this far. I'm almost there, guys!
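
    A small sketch to separate an sssd/LDAP problem from an nscd caching problem (adusername is a placeholder for any known AD account; note that sssd ships with enumerate = false by default, so getent passwd not listing domain users while single-name lookups succeed can be normal):
      # does a single-user lookup work even though enumeration does not?
      getent passwd adusername
      # which sources does each map consult?
      grep -E "^(passwd|group|shadow):" /etc/nsswitch.conf
      # flush nscd's caches (it caches negative passwd results) and retry
      nscd -i passwd; nscd -i group
      getent passwd adusername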

    Read the article

  • How to deploy website in IIS with a host name?

    - by Jayakumar
    I'm trying to host my application in IIS. These are the steps I follow: publish the code and place it in a path; open IIS, right-click on "Sites" and select "Add Website"; in that dialog give the site name and select the app pool created for the application; select the physical path of the published code; leave the IP and port in the binding section unchanged; and finally, give the host name as fus.km.com. When I try to browse the application, the page does not load: "Internet Explorer cannot display the Page". The machine domain is km.com. UPDATE: I added the host name to the hosts file and flushed the DNS. The application then asked for user credentials (I use Windows Authentication in the application), but it did not log in. On repeated tries it throws the error: HTTP Error 401.1 - Unauthorized. You do not have permission to view this directory or page using the credentials that you supplied. I tried logging in with a different user but get the same result.

    Read the article

  • White Screen, No Errors.

    - by GruffTech
    So, an interesting problem for you guys, as I'm completely lost as to what to do or where to take the next step. Server and application environment:
      CentOS release 5.3 (Final)
      Apache 2.2.3-22: EnableSendfile off, EnableMMAP off, ErrorLog logs/error_log, LogLevel debug
      PHP 5.2.6-2: error_reporting = E_ALL, display_errors = on, log_errors = on, max_execution_time=300, max_input_time=60, memory_limit=512mb
      Kohana 2.3 (PHP environment)
      HAProxy 1.3.15.6-2
      MemCacheD 1.2.6-1
    Our application is split between 3 web servers mounting an NFS storage server, with sticky load balancing between the 3 web servers. The application seemingly runs great, but every so often, instead of loading, it just shows a pure white page. Not a 404 error, not a 500 server error: a clean white page. And it returns instantly, so it's not an execution-time error. There is nothing in the error log or server error log, the proxy log shows a standard proxied connection, and the access log shows just a standard 200 status with 256 bytes transferred. To me this says the application itself is having a problem: a rare, unexplainable, seemingly random problem that causes what we've now called the "White Screen of Death." Our developers all say that since nothing reaches our error logs, it must be a server problem. But I say the same thing: there's nothing going to ANY of our logs (relevant to this, anyway), and we're not having httpd children crash as far as I can tell. Any ideas on how I can increase my logging, or somehow prove that it's not a bug in PHP, Apache, CentOS, etc.? Or, if it is somehow a bug, identify it?
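
    A couple of hedged checks to show whether PHP is dying silently behind that 200 response (the backend address is a placeholder, and the log path is the usual CentOS location): make sure fatal errors have somewhere to land, and hit one backend directly so HAProxy and the sticky sessions are out of the picture.
      # where is PHP actually sending errors? display_errors cannot help once output has already started
      php -i | grep -E "error_log|log_errors|display_errors"
      # request the affected page straight from a single Apache backend, bypassing HAProxy
      curl -s -o /dev/null -w "%{http_code} %{size_download} bytes\n" http://10.0.0.11/affected/page
      # watch that backend's Apache error log (already at LogLevel debug) while reproducing
      tail -f /var/log/httpd/error_log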

    Read the article
