Search Results

Search found 34826 results on 1394 pages for 'valid html'.


  • Google responds differently to two identical nginx setups that both return 200; any ideas?

    - by Yuji Tomita
    I'm rather confused... I have a linode.com VPS which has been cloned recently, so the settings are the same between nginx servers. One lives on a dev subdomain, one on a www. I'm trying to run a google experiment on my live server, which claims: Web server rejects utm_expid. Your server doesn't support added query arguments in URLs. My logs show on the dev server where it works: 74.125.186.32 - - [13/Sep/2012:13:33:45 -0700] "GET /product/iphone-case/?utm_expid=25706866-0 HTTP/1.1" 200 12521 "-" "Google_Analytics_Content_Experiments 74.125.186.32 - - [13/Sep/2012:13:33:45 -0700] "GET /product/iphone-case/?ab_reviews=True&utm_expid=25706866-0 HTTP/1.1" 200 14679 "-" "Google_Analytics_Content_Experiments My production server shows google making a second request. 74.125.186.41 - - [13/Sep/2012:13:34:49 -0700] "GET /product/iphone-case/?ab_reviews=on&utm_expid=25706866-1 HTTP/1.1" 200 12104 "-" "Google_Analytics_Content_Experiments 74.125.186.41 - - [13/Sep/2012:13:34:49 -0700] "GET /product/iphone-case/?utm_expid=25706866-1 HTTP/1.1" 200 12122 "-" "Google_Analytics_Content_Experiments 74.125.186.41 - - [13/Sep/2012:13:34:49 -0700] "GET /product/iphone-case/ <--- A second request for some reason. HTTP/1.1" 200 12522 "-" "Google_Analytics_Content_Experiments I'm not sure how google determines why it needs to send a second request without the querystring. The original request has clearly sent a 200 OK status response. Does anybody have any suggestions where to look next? The HTML (compared by diff) on the two pages is exactly the same.
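
    One way to narrow this down (a hedged sketch; the dev/www hostnames are placeholders, the path and utm_expid values are taken from the logs above) is to fetch the same experiment URL from both servers and diff the response headers, since a redirect or a differing Cache-Control/Vary header on the production side might explain why Google comes back with a second, query-less request:

        curl -sI 'http://dev.example.com/product/iphone-case/?utm_expid=25706866-0' > dev-headers.txt
        curl -sI 'http://www.example.com/product/iphone-case/?utm_expid=25706866-1' > www-headers.txt
        diff dev-headers.txt www-headers.txt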

    Read the article

  • CMS/Wiki to use for a HTML5 video site

    - by Clinton Blackmore
    Greetings. I want to put up a website with instructive screencasts, and allow for people to add comments to them. I would like to use the Video for Everybody technique, partly because I dislike Flash and because it helps in a small way to move the web forward [while being backwards compatible]. I recognize that HTML5 is still in draft, and that support for it varies. I do have some hosting space, and can run Perl, PHP, and Ruby on Rails applications, with a MySQL backend. I should mention that part of my working job involves running some web servers, and that I am a programmer by training (with only a limited familiarity with Perl and PHP, and none with Ruby). I should mention why I don't particularly want to go with a video hosting site (like YouTube or Vimeo): Flash Video Resolution and Quality [I'd like to put up 800x600 videos] Videos promote a club that is not strictly non-profit [ie. may fall afoul of Terms of Service] I'm already paying for web hosting, and free video hosting comes with time and bandwidth limits I don't want there to be two locations where you can comment on the video Now, having said all that, I'd be quite comfortable putting up my own HTML pages, except: that's so web 1.0! :) [ie. it does not allow for comments] I also want to do some blogging and possibly put up a wiki; the site will not be entirely screencasts So, can anyone recommend a CMS (or Wiki, or similar application) that I can customise for this purpose?

    Read the article

  • Connecting a network printer via a Thecus N2100 - works in Vista, not in Windows 7

    - by Jon Skeet
    I have a Lexmark E250d printer attached to a Thecus N2100 NAS. On Windows Vista I've managed to configure this using an "Internet" printer port with the URL of http://thecus:631/printers/usb-printer. I can add a printer in a similar way in Windows 7, but it never manages to print the test page. If I go to "Configure Port" in Vista, it just has "Security Options" - on Windows 7 it's asking about Raw mode vs LPR mode etc. On Vista I'm using an E250d-specific driver from Lexmark; on Windows 7 there's a Microsoft E250d driver, or a Universal PCL XL driver from Lexmark... I wouldn't expect this difference to be related to the problem, but I thought I'd mention it anyway. (Lexmark doesn't have a Windows 7 E250d-specific driver as far as I can see.) Any suggestions? I was thinking of upgrading my main laptop from Vista to Windows 7, but I'd really like to get this sorted first... EDIT: If I connect to http://thecus:631/printers/usb-printer via Chrome while capturing with Wireshark, I get this response: HTTP/1.1 200 OK Date: Wed, 06 Jan 2010 16:47:23 GMT Connection: Keep-Alive Keep-Alive: timeout=60 Content-Language: C Transfer-Encoding: chunked Content-Type: text/html;charset=iso-8859-1 0 No idea what that's meant to be doing... EDIT: On further consultation, this would appear to be the Internet Printing Protocol which is layered on HTTP. Printing a test page successfully from Vista posts to that URL. Will attempt the same on Windows 7...

    Read the article

  • htaccess on remote server issues - password prompt not accepting input

    - by pying saucepan
    EDIT: I will contact the university about my problem after labor day weekend, but I thought if someone knew a quick fix that I haven't tried, or if the problem has an obvious fix then I could hope to try my luck here, thanks! TLDR: Sorry its a long post, I thought I should be... thorough. I am having a common issue (found a dead thread through google with no solution to the same problem) with the prompt to enter in a username and password via htaccess rights, but this prompt will keep popping up asking for a username and password when trying to access my home directory on my university's server which has the .htaccess and .htpasswd files. It does not matter if I enter in correct or incorrect credentials, the prompt will keep asking me for input without displaying my home directory. Ever since I have included these ht files I have never once been able to get past the username/password no matter what I have tried, save for removing them from the directory I am trying to access (my top level directory that I own). This kind of served my original goal of making the top level directory inaccessible to casual users, but if I wanted to use this method on other places, I would want it to work as intended. And I also like it when computers do what I wish they would, so any help is appreciated. Some things I have tried: Changing the file/directory access rights: they told me to try these commands if people can't access my files cd ~/public_html find ./ -type d -exec chmod 755 {} \; find ./ -type f -exec chmod 644 {} \; enter in the single character name/pw at least twenty times in a row, no cheddar. so I changed directory with cd ~ in hopes that this would be my home directory, since my home directory contains the "public_html" directory, so logic tells me that the ~ tilde symbol is the top level directory that I have ownership of. Then I did those two commands to change the rights on the files inside, I am still having no luck. How I got to this point: I have been following the instructions given to me through my university's website for setting up my little directory. A link on how they describe how to password protect the home directory is given below: "Protect Web Directories" instructions I have everything in order except for one small detail that I feel probably does not matter. I am on windows and so I am using winSCP to remote control my allocated server space. The small detail is that as the instructions indicate (on step 3) that I should use the command htpasswd -c .htpasswd {username} where {username} is my folder that holds my allocated server space. But this command requires further input through the terminal, and unfortunately winSCP does not offer this kind of functionality. So I looked up some basic instructions on using htaccess and it is formatted correctly such that the .htaccess file appears as follows: AuthType Basic AuthName "Verify" AuthUserFile /correctpath/.htpasswd require valid-user and this file is in the root directory for my server space as well as the .htpasswd file which has only this data inside: username:password I know for sure that these two files must be formatted correctly, at least according to their tutorial, because before my path was incorrectly formatted via including some curly { braces } without knowing the correct way to do this at first. And the password prompt that shows up when accessing my directory responded by loading an error page indicating to contact OSU admin or something not important. But now that I have everything like it 'should' be. 
    I know this because when I enter my credentials "username and password" the prompt pops up for my username and password again and again, whether or not I enter correct information. The only exception is that if I click cancel it will direct me to a page saying that I need to enter a username and password. Note that I am very inexperienced at server-related business; two days ago I couldn't have told you what a website actually consists of. So, if you use some technical jargon I may or may not need to look it up and get back to you before I actually understand what you mean, but I am a quick learner and it probably won't matter.
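
    If the only blocker is that WinSCP can't drive an interactive htpasswd session, one workaround (a sketch; the username and password are placeholders) is to generate the hash on any machine with Apache's tools or OpenSSL and paste the resulting line into .htpasswd by hand. Note that on Linux servers Apache expects a hashed password in .htpasswd, so a literal username:password line will generally never match and the prompt will just keep reappearing:

        # print a username:hash line to stdout (-n) using a password given on the command line (-b)
        htpasswd -nb myuser 'mypassword'
        # or, without htpasswd available, build an Apache-style MD5 hash with openssl
        printf 'myuser:%s\n' "$(openssl passwd -apr1 'mypassword')"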

    Read the article

  • legit emails in junkbox

    - by acidzombie24
    Hey, this is actually a reverse question. My personal email ([email protected]) is winding up in many people's junk boxes and I have no idea why. What may the cause be? Is it because it has the word Entrepreneur (and programmer) in my sig? Is it because my first name is unique (European-like)? It's driving me crazy. I send out dozens of business emails a month to people I've just met, so it's actually hurting me much more than others :( -edit- I also want to mention this is not spam. Typically I email people I meet to say hi or to follow up. I was requested by someone to send him an email so I can test something, so I did and he replied to me 10 days later telling me he found it in his junk, like many others have said to me. -edit- bortzmeyer suggested emailing [email protected] I did and here are the results SPF check: pass DomainKeys check: pass DKIM check: pass Sender-ID check: pass SpamAssassin check: ham ---------------------------------------------------------- SpamAssassin check details: ---------------------------------------------------------- SpamAssassin v3.2.5 (2008-06-10) Result: ham (-2.6 points, 5.0 required) pts rule name description ---- ---------------------- -------------------------------------------------- -0.0 SPF_PASS SPF: sender matches SPF record -2.6 BAYES_00 BODY: Bayesian spam probability is 0 to 1% [score: 0.0000] 0.0 HTML_MESSAGE BODY: HTML included in message

    Read the article

  • DNS server and fallback outside home

    - by Jens
    I have my own DNS server at home to access local names, and that is working fine. Then there's my laptop: obviously it leaves home now and then, so it connects to different networks outside my home, where my DNS server is not accessible... So I figured that I would just add Google as a secondary DNS... But actually, when I do that, suddenly I can't access my local stuff, the page won't resolve (at home, that is, obviously), as if my laptop is getting a quicker response from Google's DNS or something, because it can't find anything on the addresses I use locally. If I then remove the secondary DNS and keep my own, it works fine again... So do I somehow need to separate which DNS servers to use on which networks? I already use separate DNS settings when I connect using my 3G modem, but when I use hotspots it seems to use the same settings regardless (at least in the train); and can it also differ for wired connections?... Is there another solution? OS: Windows 7 Ultimate, x64 EDIT: Currently trying this "hack/fix" out for the time being: http://blog.johnruiz.com/2011/12/windows-does-not-always-honor-dns-order.html
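
    To confirm that this is a fallback-order problem rather than the home DNS server itself, a quick hedged check (the hostname and addresses below are placeholders) is to query each server explicitly from the laptop while at home; if the local server answers and Google's doesn't, Windows is simply not trying the configured servers strictly in order:

        nslookup myserver.home.lan 192.168.1.10
        nslookup myserver.home.lan 8.8.8.8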

    Read the article

  • Adding new SPNs to existing service ids

    - by jmh
    We have a tomcat server using spring-security kerberos to authenticate users to the webpage against active directory. There are around 25 domain controllers. The site has two CNAME based DNS aliases. The site currently has one Service ID with SPNs registered for the DNS A record as well as each of the CNAMEs. While everything is working right now, I don't know how to reliably change this configuration without possible downtime. The reason is that clients cache kerberos tickets: http://www.juniper.net/techpubs/en_US/uac4.2/topics/concept/user-role-active-directory-about.html The 'kerbtray.exe' program is helpful for viewing and deleting Kerberos tickets on the endpoint. Old tickets must be purged from the endpoint if SPNs are updated or passwords are changed (assuming the endpoint still has a cached copy of the ticket from a prior SPNEGO request to the MAG Series device. During testing, you should purge tickets before each authentication request. Description of "klist" program used to inspect/delete cached tickets: http://technet.microsoft.com/en-us/library/hh134826.aspx So if each of the clients (users running windows) who connect to my web server have kerberos tickets that become invalid as soon as I update the SPNs or passwords, how do I ensure changes are seamless? Are there any operations that can be done safely? I can't just ask all of the users to install klist and delete their old tickets.
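
    For reference, adding an SPN for a brand-new alias does not change the service account's key, so tickets already issued for the existing names should stay valid; the cached-ticket problem mostly bites when an SPN moves to a different account or the password changes. A hedged sketch with a placeholder account name (use -A instead of -S on older setspn versions):

        setspn -l DOMAIN\svc-tomcat
        setspn -s HTTP/newalias.example.com DOMAIN\svc-tomcat
        rem on a test client, drop cached tickets before re-testing:
        klist purge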

    Read the article

  • Verify client certificate CN in Tomcat (APR)

    - by Petter
    I'm running a tomcat installation with the APR libraries installed (with the OpenSSL HTTPS stack that comes with it). What I'm trying to do is to lock a specific HTTPS connector down to users of a specific certificate. Adding client certificate verification is no issue, but I can't get it to validate against a specific Common name only. I was perhaps a bit naïve and thought the mod_ssl attribute SSLRequire typically used in Apache Httpd would work, but that property is not recognized by the Tomcat implementation. (http://tomcat.apache.org/tomcat-7.0-doc/config/http.html#SSL%20Support points to some mod_ssl docs, but the Tomcat implementation does not seem to cover all aspects of mod_ssl). I can get this to work by using the Java version of the connector instead of APR (losing some performance) and just add a trust store with that one certificate in it. However, using openssl without the SSLRequire expressions, I'm not sure how to do this with Tomcat7 (on Windows if that matters). <Connector protocol="HTTP/1.1" port="443" maxThreads="150" scheme="https" secure="true" SSLEnabled="true" SSLCertificateFile="mycert.pem" SSLCertificateKeyFile="privkey.pem" SSLCACertificateFile="CABundle.pem" SSLVerifyClient="require" SSLProtocol="TLSv1" SSLRequire="(%{SSL_CLIENT_S_DN_CN} eq &quot;host.example.com&quot;)"/> Can you suggest a way to make this work using Tomcat/APR/OpenSSL?

    Read the article

  • Can't connect to Synology DiskStation through HTTPS when using Windows 7 Import

    - by LeonidasFett
    a little background to my problem: I have a Synology DiskStation 213j that I use as a backup/data storage solution. When I'm at work, I would like to push and pull files from my DiskStation, but I can't use VPN, which is forbidden for outgoing connections. So I wanted to try HTTPS so I can at least connect securely to the web interface. I mostly use Chrome, which uses the Windows Certificate Store. So I tried importing a self-signed certificate into it, without success. I still get a warning in Chrome telling me the connection is not secure because it can't be verified. When I import the certificate into Firefox though, it works and I can connect through HTTPS. I checked my domain on this site: http://www.sslshopper.com/ssl-checker.html It shows no errors, only a warning that the certificate is self-signed. Which is OK in this case. Anyone got any idea why importing the certificate into Windows 7 doesn't work? I tried Right-Click domain.mydomain.de.crt File --> Install certificate --> Next --> both options here (in case of "Place certificate in following store:" I selected "Third Party Root Certificate Authorities") to no avail.
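
    One thing worth trying (a hedged suggestion, reusing the file name from the question): put the self-signed certificate into the current user's "Trusted Root Certification Authorities" store rather than "Third-Party Root Certificate Authorities", either via certmgr.msc or from a command prompt. Chrome will also keep warning if the certificate's CN/SAN doesn't match the hostname you browse to, so that is worth checking as well.

        certutil -user -addstore Root domain.mydomain.de.crt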

    Read the article

  • How can I configure apache to cache the images that it is serving? Right now it is giving headers t

    - by Tchalvak
    Serving up images that don't seem to cache There's a LAPP (postgresql instead of mysql) running over on http://ninjawars.net. I just recently noticed that images don't seem to be caching with any kind of good frequency as I was reloading a page with a few images on it here: http://www.ninjawars.net/attack_player.php Here is an example image (they're probably all being served exactly the same): http://www.ninjawars.net/images/characters/fighter.png Checking the header, it seems that the caching is set to: Cache-Control:max-age=0 (the full header for this image-like-all-the-others is... Request URL:http://www.ninjawars.net/images/characters/fighter.png Request Method:GET Status Code:200 OK Request Headers Accept:application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,/;q=0.5 Cache-Control:max-age=0 Referer:http://www.ninjawars.net/images/characters/fighter.png User-Agent:Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/533.4 (KHTML, like Gecko) Chrome/5.0.375.3 Safari/533.4 Response Headers Accept-Ranges:bytes Content-Length:938 Content-Type:image/png Date:Thu, 13 May 2010 21:24:07 GMT ETag:"ffd4d-3aa-4837efc120540" Last-Modified:Mon, 05 Apr 2010 15:28:45 GMT Server:Apache ) So what modules or config or htaccess or whatever do I change to have it cache images, e.g. for 24 hours?
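
    Note that the Cache-Control:max-age=0 above appears among the request headers (browsers send it on reload); the response itself carries no Cache-Control or Expires at all, so clients have nothing to cache against. A minimal sketch of one common fix, assuming mod_expires is available and enabled:

        <IfModule mod_expires.c>
            ExpiresActive On
            ExpiresByType image/png "access plus 24 hours"
            ExpiresByType image/jpeg "access plus 24 hours"
            ExpiresByType image/gif "access plus 24 hours"
        </IfModule>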

    Read the article

  • Installing Ruby 1.9.3 OSX 10.7.4 breaks after altering PATH

    - by R V
    I was having trouble installing ruby 1.9.3-p194 from ruby 1.8.7 on my mac osx 10.7.4. I was trying to fix my Homebrew after running "brew doctor" and got the message of "/usr/bin occurs before /usr/local/bin This means that system-provided programs will be used instead of those provided by Homebrew. The following tools exist at both paths: c++-4.2 cpp-4.2 erb g++-4.2 gcc-4.2 gcov-4.2 gem i686-apple-darwin11-cpp-4.2.1 i686-apple-darwin11-g++-4.2.1 i686-apple-darwin11-gcc-4.2.1 irb rake rdoc ri ruby testrb" I fixed it by entering the following, which I found in another stackoverflow answer: export PATH="/usr/local/bin:/usr/local/sbin:~/bin$PATH" Lo and behold! When I typed that, ruby updated to 1.9.3-p194. Ruby files seem to compile and run just fine. However, afterward, my navigation around the terminal is messed up severely. For instance I can't do the command "open example_file.html" and have the file pop up in Chrome; instead I get the error: "-bash: open: command not found" Also, when I change directory, I get an error: inputting "$ cd desktop" yields the output "-bash: dirname: command not found" but the directory does then change... strange. When I exit out of a terminal window all this resets. I'm back to Ruby 1.8.7, have to use the PATH command again to update to 1.9.3, and command line navigation gets broken again. Any guidance on how to remedy this so I can use 1.9.3-p194 and also have normal terminal navigation would be greatly appreciated.
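
    It looks like the export line itself may be the culprit: ~ does not expand inside double quotes and there is no colon before $PATH, so ~/bin gets glued onto the first original PATH entry and /usr/bin (where open and dirname live) effectively drops out of the search path. A corrected version, plus a line to persist it across terminal windows, might look like:

        export PATH="/usr/local/bin:/usr/local/sbin:$HOME/bin:$PATH"
        # make it stick for new shells
        echo 'export PATH="/usr/local/bin:/usr/local/sbin:$HOME/bin:$PATH"' >> ~/.bash_profile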

    Read the article

  • SFTP ChRoot result in broken pipe

    - by Patrick Pruneau
    I have a website to which I want to add some restricted access on a sub-folder. For this, I've decided to use CHROOT with SFTP (I mostly followed this link : http://shapeshed.com/chroot_sftp_users_on_ubuntu_intrepid/) For now, I've created a user (sio2104) and a group (magento). After following the guide, my folder listing looks like this : -rw-r--r-- 1 root root 27 2012-02-01 14:23 index.html -rw-r--r-- 1 root root 21 2012-02-01 14:24 info.php drwx------ 15 root root 4096 2012-02-25 00:31 magento As you can see, I've chowned the magento folder to root:root; that's where I wanted to jail in the user (and everything else, by the way). Also, inside the magento folder I chowned everything to sio2104:magento so they can access what they want. Finally, I've added this to the sshd_config file : #Subsystem sftp /usr/lib/openssh/sftp-server Subsystem sftp internal-sftp Match Group magento ChrootDirectory /usr/share/nginx/www/magento ForceCommand internal-sftp AllowTCPForwarding no X11Forwarding no PasswordAuthentication yes #UsePAM yes And the result is... well, I can enter my login and password and it all ends with a "broken pipe" error. $ sftp [email protected] [....some debug....] [email protected]'s password: debug1: Authentication succeeded (password). Authenticated to 10.20.0.50 ([10.20.0.50]:22). debug1: channel 0: new [client-session] debug1: Requesting [email protected] debug1: Entering interactive session. Write failed: Broken pipe Connection closed Verbose mode gives nothing to help. Anyone have an idea of what I've done wrong? If I try to log in with ssh or sftp with my personal user, everything works fine.
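
    A couple of hedged things to check: sshd requires the ChrootDirectory and every parent of it to be root-owned and not group/world-writable, and the jailed user still needs read/execute on the chroot itself, which the drwx------ root:root magento directory above does not grant. Something like the following shows the relevant modes, relaxes them, and runs a second sshd in debug mode (arbitrary port) to see the real reason for the disconnect:

        ls -ld /usr /usr/share /usr/share/nginx /usr/share/nginx/www /usr/share/nginx/www/magento
        chmod 755 /usr/share/nginx/www/magento   # stays root-owned, but lets the jailed user traverse it
        /usr/sbin/sshd -d -p 2222                # then connect with: sftp -P 2222 sio2104@10.20.0.50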

    Read the article

  • mod_cache not working

    - by Pistos
    I have a PHP site that has many dynamically generated pages. I'm trying to turn to mod_cache to help boost performance, because in most cases, content does not change in a given day. I have configured mod_cache as best I could, following examples around the web, including the mod_cache page on apache.org. When I set LogLevel debug, I see a bit of information about the caching that is [not] happening. There are plenty of pairs of lines like this: [Fri Jun 01 17:28:18 2012] [debug] mod_cache.c(141): Adding CACHE_SAVE filter for /foo/bar [Fri Jun 01 17:28:18 2012] [debug] mod_cache.c(148): Adding CACHE_REMOVE_URL filter for /foo/bar Which is fine, because I've set CacheEnable disk /foo, to indicate that I want everything under /foo cached. I'm new to mod_cache, but my understanding about these lines is that it just means that mod_cache has acknowledged that the URL is supposed to be cached, but there are supposed to be more lines indicating that it is saving the data to cache, and then later retrieving them on subsequent hits to the same URL. I can hit the same URL till I'm blue in the face, whether with F5 refreshing, or not, or with different browsers, or different computers. It's always that pair of lines that shows in the logs, and nothing else. When I set CacheEnable disk /, then I see more activity. But I don't want to cache the entire site, and there are many, many different subpaths to the site, so I don't want to have to modify code to set no-cache headers in all the necessary places. I'll mention that mod_rewrite is in use here, rewriting /foo/bar to something like index.php?baz=/foo/bar, but my understanding is that mod_cache uses the pre-rewrite URL, not the post-rewrite URL. As far as I can tell, I have the response headers not getting in the way of caching. Here's an example from one hit: Cache-Control:must-revalidate, max-age=3600 Connection:Keep-Alive Content-Encoding:gzip Content-Length:16790 Content-Type:text/html Date:Fri, 01 Jun 2012 21:43:09 GMT Expires:Fri, 1 Jun 2012 18:43:09 -0400 Keep-Alive:timeout=15, max=100 Pragma: Server:Apache Vary:Accept-Encoding mod_cache config is as follows: CacheRoot /var/cache/apache2/ CacheDirLevels 3 CacheDirLength 2 CacheEnable disk /foo What is getting in the way of mod_cache doing its job of caching?
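
    One hedged observation: mod_cache by default honours the Cache-Control: max-age=0 / no-cache that browsers send on reload and will revalidate rather than serve from cache. While testing it may help to relax that and give responses a default TTL; these are standard mod_cache directives, the values are only suggestions:

        CacheEnable disk /foo
        CacheIgnoreCacheControl On      # don't let client request headers bypass the cache
        CacheIgnoreNoLastMod On         # cache responses even without a Last-Modified header
        CacheDefaultExpire 3600         # fallback TTL in seconds
        CacheMaxExpire 86400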

    Read the article

  • Outlook 2007/2010 autodiscovering old Exchange info

    - by Dan
    I currently have an Exchange setup as follows: two Exchange 2003 servers clustered together set up as the current mailbox stores, one Exchange 2003 setup as a frontend, one Exchange 2007 set up as a frontend (was set up for testing by my predecessor, never really used intentionally), and now four Exchange 2010 servers - two mailboxes in a DAG and two with Hub/CAS. Everything seems to be working fine with one exception - Outlook 2007/2010 clients are still autodiscovering the test 2007 frontend and not the 2010 CAS array. I know this because there's an expired cert on the 2007 box so the client displays a cert error when you attempt to autocreate the outlook profile. From what I've read, there is an SCP (Service Connection Point) in AD that is pointing to the old server and it is getting returned first, causing Outlook to try it first. How can I prevent Outlook from even attempting to connect to this 2007 box from now on? http://www.msexchange.org/articles_tutorials/exchange-server-2010/management-administration/exchange-autodiscover.html When Outlook 2007 is installed on a domain joined workstation then the Outlook client will query Active Directory for the Autodiscover information. Active Directory will return a list of SCP’s and the Outlook client will automatically select the first SCP in this list. Using the information found in the SCP the Outlook client will contact the Client Access Server for its configuration information and the Outlook client will be configured automatically.
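
    If the stale SCP is the culprit, one commonly suggested approach (sketched with a placeholder server name and URL) is to inspect and, if appropriate, repoint the old 2007 CAS's AutoDiscoverServiceInternalUri from the Exchange Management Shell so it no longer advertises itself to domain-joined Outlook clients:

        Get-ClientAccessServer | fl Name,AutoDiscoverServiceInternalUri
        Set-ClientAccessServer -Identity "OLD2007CAS" -AutoDiscoverServiceInternalUri "https://autodiscover.example.com/Autodiscover/Autodiscover.xml"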

    Read the article

  • first time setting up ssl, running into a strange problem, tutorials haven't been too helpful

    - by pedalpete
    This is my first time trying to set up SSL for one of my sites, and I'm running it on a server that already hosts 3 other sites. I'm running apache2.?? and the install came with an ssl.conf file. The ssl.conf has the following settings: LoadModule ssl_module modules/mod_ssl.so Listen 443 AddType application/x-x509-ca-cert .crt AddType application/x-pkcs7-crl .crl <VirtualHost *:443> ServerAdmin [email protected] DocumentRoot /var/www/html/securesite ServerName securesite.com ErrorLog logs/securesite-error_log CustomLog logs/securesite-access_log common SSLEngine on SSLCertificateFile /etc/httpd/ssl.crt/securesite.com.crt SSLCertificateKeyFile /etc/httpd/ssl.key/server.key SSLCertificateChainFile /etc/httpd/ssl.crt/gd_bundle.crt </VirtualHost> When I run 'apachectl configtest', I don't get any errors, but running 'apachectl -k restart', I get 'httpd not running, trying to start'. I have two questions: 1) Is there an error in the way I'm defining my VirtualHost for 443? The rest of my entries point to <VirtualHost *:80. When I comment out the above entry, Apache runs fine. 2) Do I need to set up a redirect from port 80 for the secure site? Most users are going to go to http: or www., and I need to send them to https:. Does Apache do this automatically, or do I need to create an entry with a redirect?
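
    On question 2: Apache does not redirect to HTTPS automatically; the usual approach is a separate port-80 VirtualHost that issues the redirect. A minimal sketch using mod_alias, with the ServerName taken from the config above. (On question 1, "httpd not running, trying to start" only means it wasn't running before; if the subsequent start fails, the actual reason is normally in error_log.)

        <VirtualHost *:80>
            ServerName securesite.com
            Redirect permanent / https://securesite.com/
        </VirtualHost>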

    Read the article

  • CentOs 6.3 can't install Git

    - by drivard
    I am trying to install git on an OpenVZ container using the CentOs 6.3 precreated template. When I try the command line yum install git I get the message: Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: www.cubiculestudio.com * epel: mirror.csclub.uwaterloo.ca * extras: www.cubiculestudio.com * rpmforge: mirror.us.leaseweb.net * updates: www.cubiculestudio.com Setting up Install Process No package git available. Error: Nothing to do As I understand, the git package should be in the centos6 base repository: http://pkgs.org/centos-6-rhel-6/centos-rhel-x86_64/git-1.7.1-2.el6_0.1.x86_64.rpm.html But it doesn't find it, I even have the EPEL and RPMForge repo enabled, but still can't find the git package. yum repolist Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: www.cubiculestudio.com * epel: mirror.csclub.uwaterloo.ca * extras: www.cubiculestudio.com * rpmforge: mirror.us.leaseweb.net * updates: www.cubiculestudio.com repo id repo name status base CentOS-6 - Base 4,776 epel Extra Packages for Enterprise Linux 6 - i386 6,523 extras CentOS-6 - Extras 4 rpmforge RHEL 6 - RPMforge.net - dag 4,501 updates CentOS-6 - Updates 596 vz-base vz-base 3 vz-updates vz-updates 0 repolist: 16,403 The weirdest thing is that my OpenVZ server is running on CentOs 6.3 and I was able to install git without any issue. Can you help me understanding why it doesn't find the package? Thank you in advance.
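
    A few hedged checks: refresh the metadata, confirm the base repo really offers git (git 1.7.1 normally comes from base on CentOS 6), and look for exclude= lines that some OpenVZ templates ship in their yum configuration:

        yum clean all && yum makecache
        yum --disablerepo="*" --enablerepo=base list available | grep "^git"
        grep -ri "^exclude" /etc/yum.conf /etc/yum.repos.d/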

    Read the article

  • Painfully slow login to AD bound Mac OS X Leopard machine when off home network

    - by GeeBee
    Dear all Just looking for a little help with this problem that seems to trip a lot of people up and is causing me no end of grief. I have a number of fully patched OS X Leopard machines that are bound to my AD (Server 2003). When on the home network, logging in seems swift and works as expected. When users take the machines off site, login can take 5 minutes or more. The user adds correct credentials but the desktop does not appear for a very long time. Outside the office, I have tried logging in using a local Admin account, switching off Airport and then logging in using an AD account. In this situation login is immediate again. It all seems as if Leopard is finding a suitable wireless network, spending far too long looking for the Domain before eventually giving up and using the cached credentials instead. I have read that disabling Bonjour on the machine will stop this problem (i have not yet tested) http://www.macwindows.com/leopardAD.html#111607z ...but I am reluctant to use this "Solution" as I would like to be able to use Bonjour on the local network as well as having AD-bound machines. However, is disabling Bonjour really the only answer? Is there not some time-out setting somewhere that could be amended to stop Leopard spending forever looking for home? Any help would be very gratefully received Thanks Gordon

    Read the article

  • hostapd running on Ubuntu Server 13.04 only allows a single station to connect when using WPA

    - by user450688
    Problem Only a single station can connect to hostapd at a time. Any single station can connect (W8, OSX, iOS, Nexus) but when two or more hosts are connected at the same time the first client loses its connectivity. However there are no connectivity issues when WPA is not used. Setup Linux (Ubuntu server 13.04) wireless router (with separate networks for wired WAN, wired LAN, and Wireless LAN. iptables-save output: *nat :PREROUTING ACCEPT [0:0] :INPUT ACCEPT [0:0] :OUTPUT ACCEPT [0:0] :POSTROUTING ACCEPT [0:0] -A POSTROUTING -s 10.0.0.0/24 -o p4p1 -j MASQUERADE -A POSTROUTING -s 10.0.1.0/24 -o p4p1 -j MASQUERADE COMMIT *mangle :PREROUTING ACCEPT [13:916] :INPUT ACCEPT [9:708] :FORWARD ACCEPT [4:208] :OUTPUT ACCEPT [9:3492] :POSTROUTING ACCEPT [13:3700] COMMIT *filter :INPUT DROP [0:0] :FORWARD DROP [0:0] :OUTPUT ACCEPT [9:3492] -A INPUT -i p4p1 -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -i p4p1 -p tcp -m tcp --dport 22 -m state --state NEW -j ACCEPT -A INPUT -i eth0 -j ACCEPT -A INPUT -i wlan0 -j ACCEPT -A INPUT -i lo -j ACCEPT -A FORWARD -i p4p1 -m state --state RELATED,ESTABLISHED -j ACCEPT -A FORWARD -i eth0 -j ACCEPT -A FORWARD -i wlan0 -j ACCEPT -A FORWARD -i lo -j ACCEPT COMMIT /etc/hostapd/hostapd.conf #Wireless Interface interface=wlan0 driver=nl80211 ssid=<removed> hw_mode=g channel=6 max_num_sta=15 auth_algs=3 ieee80211n=1 wmm_enabled=1 wme_enabled=1 #Configure Hardware Capabilities of Interface ht_capab=[HT40+][SMPS-STATIC][GF][SHORT-GI-20][SHORT-GI-40][RX-STBC12] #Accept all MAC address macaddr_acl=0 #Shared Key Authentication wpa=1 wpa_passphrase=<removed> wpa_key_mgmt=WPA-PSK wpa_pairwise=CCMP rsn_pairwise=CCMP ###IPad Connectivevity Repair ieee8021x=0 eap_server=0 Wireless Card #lshw output product: RT2790 Wireless 802.11n 1T/2R PCIe vendor: Ralink corp. 
physical id: 0 bus info: pci@0000:03:00.0 logical name: mon.wlan0 version: 00 serial: <removed> width: 32 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list logical wireless ethernet physical configuration: broadcast=yes driver=rt2800pci driverversion=3.8.0-25-generic firmware=0.34 ip=10.0.1.254 latency=0 link=yes multicast=yes wireless=IEEE 802.11bgn #iw list output Band 1: Capabilities: 0x272 HT20/HT40 Static SM Power Save RX Greenfield RX HT20 SGI RX HT40 SGI RX STBC 2-streams Max AMSDU length: 3839 bytes No DSSS/CCK HT40 Maximum RX AMPDU length 65535 bytes (exponent: 0x003) Minimum RX AMPDU time spacing: 2 usec (0x04) HT RX MCS rate indexes supported: 0-15, 32 TX unequal modulation not supported HT TX Max spatial streams: 1 HT TX MCS rate indexes supported may differ Frequencies: * 2412 MHz [1] (27.0 dBm) * 2417 MHz [2] (27.0 dBm) * 2422 MHz [3] (27.0 dBm) * 2427 MHz [4] (27.0 dBm) * 2432 MHz [5] (27.0 dBm) * 2437 MHz [6] (27.0 dBm) * 2442 MHz [7] (27.0 dBm) * 2447 MHz [8] (27.0 dBm) * 2452 MHz [9] (27.0 dBm) * 2457 MHz [10] (27.0 dBm) * 2462 MHz [11] (27.0 dBm) * 2467 MHz [12] (disabled) * 2472 MHz [13] (disabled) * 2484 MHz [14] (disabled) Bitrates (non-HT): * 1.0 Mbps * 2.0 Mbps (short preamble supported) * 5.5 Mbps (short preamble supported) * 11.0 Mbps (short preamble supported) * 6.0 Mbps * 9.0 Mbps * 12.0 Mbps * 18.0 Mbps * 24.0 Mbps * 36.0 Mbps * 48.0 Mbps * 54.0 Mbps max # scan SSIDs: 4 max scan IEs length: 2257 bytes Coverage class: 0 (up to 0m) Supported Ciphers: * WEP40 (00-0f-ac:1) * WEP104 (00-0f-ac:5) * TKIP (00-0f-ac:2) * CCMP (00-0f-ac:4) Available Antennas: TX 0 RX 0 Supported interface modes: * IBSS * managed * AP * AP/VLAN * WDS * monitor * mesh point software interface modes (can always be added): * AP/VLAN * monitor valid interface combinations: * #{ AP } <= 8, total <= 8, #channels <= 1 Supported commands: * new_interface * set_interface * new_key * new_beacon * new_station * new_mpath * set_mesh_params * set_bss * authenticate * associate * deauthenticate * disassociate * join_ibss * join_mesh * set_tx_bitrate_mask * set_tx_bitrate_mask * action * frame_wait_cancel * set_wiphy_netns * set_channel * set_wds_peer * Unknown command (84) * Unknown command (87) * Unknown command (85) * Unknown command (89) * Unknown command (92) * testmode * connect * disconnect Supported TX frame types: * IBSS: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 * managed: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 * AP: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 * AP/VLAN: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 * mesh point: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 * P2P-client: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 * P2P-GO: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 * Unknown mode (10): 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0 Supported RX frame types: * IBSS: 0x40 0xb0 0xc0 0xd0 * managed: 0x40 0xd0 * AP: 0x00 0x20 0x40 0xa0 0xb0 0xc0 0xd0 * AP/VLAN: 0x00 0x20 0x40 0xa0 0xb0 0xc0 0xd0 * mesh point: 0xb0 0xc0 0xd0 * P2P-client: 0x40 0xd0 * P2P-GO: 0x00 0x20 0x40 0xa0 0xb0 0xc0 0xd0 * Unknown mode (10): 0x40 0xd0 Device supports RSN-IBSS. 
HT Capability overrides: * MCS: ff ff ff ff ff ff ff ff ff ff * maximum A-MSDU length * supported channel width * short GI for 40 MHz * max A-MPDU length exponent * min MPDU start spacing Device supports TX status socket option. Device supports HT-IBSS.
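
    A hedged first step is to run hostapd in the foreground with debugging while a second station joins, and to ask hostapd what it currently believes is associated; the config path below is the one from this question:

        hostapd -dd /etc/hostapd/hostapd.conf    # foreground with debug output; watch the 4-way handshake
        hostapd_cli all_sta                      # list associated stations and their flags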

    Read the article

  • How can I download a copy of an S3 public data set?

    - by tripleee
    I was naively assuming I could do something like s3cmd sync s3://snap-d203feb5 /var/tmp/copy but I seem to have the wrong idea of how to go about this. I cannot even get a simple thing to work; vnix$ s3cmd ls s3://snap-d203feb5 Bucket 'snap-d203feb5': ERROR: Bucket 'snap-d203feb5' does not exist I guess the identifier I have is not for a "bucket" but for a "public data set". How do I go from one to the other? Do I have to start up an EC2 instance and create a bucket for this? How? The instructions at http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/using-public-data-sets.html seem to assume I want to use the data in an EC2 instance, but in this case, I'd just like to browse a bit, at least for a start. By the by, copy/pasting the "US Snapshot ID" causes a nasty traceback from Python; they publish the ID with a weird Unicode (I presume) dash which cannot directly be copy/pasted. Is there a mistake when I copy it? And what's the significance of "US" in there? Can't I use the data outside North America?
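
    For what it's worth, the snap- prefix suggests this data set is published as an EBS snapshot rather than an S3 bucket, which is why s3cmd can't see it; the usual route is to create a volume from the snapshot in the same region/availability zone and attach it to a running instance. A sketch with the current AWS CLI (volume id, instance id and zone are placeholders; on the instance the device may show up as /dev/xvdf):

        aws ec2 create-volume --snapshot-id snap-d203feb5 --availability-zone us-east-1a
        aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-xxxxxxxx --device /dev/sdf
        # then, on the instance:
        sudo mkdir /mnt/dataset && sudo mount /dev/xvdf /mnt/dataset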

    Read the article

  • How do I hook into Tar with BASH?

    - by orb
    Long Story Short I am working with Tar archives that contain PNG images in base64 encoding. I would like to use BASH (or whatever else works) to hook into the extraction function of Tar to decode PNG images from base64 encoding to standard PNG encoding after the files are unpacked. A simple cat $input-file | base64 -d >$output-file will successfully decode the images. Is there a way I can hook into tar -xf so that users do not have to do any (or minimal) extra work to decode the images? In the GNU Tar documentation (http://www.gnu.org/software/tar/manual/html_chapter/Backups.html#SEC97) I found that there are in fact variables reserved to hold the names of functions I desire to be hooked into various moments in Tar program execution. However, the documentation explains that these variables, along with other variables that can be set to configure Tar, are located in a file named backup-specs. Unfortunately, the path to this file is not given. Further, running sudo find / -name backup-specs tells me that this file is not present on my Ubuntu version 13.04 system. Background Information not included in the Long Story Short I have been working on a browser-based (WebGL) particle effect creation application (http://www.particleeffect.org), (https://github.com/cgrabowski/webgl-particle-effect-editor), (https://github.com/cgrabowski/webgl-particle-effect). I have began to write a client-side-only solution for saving and loading effect data as a tar archive. However, since client-side JavaScript has limited capability to process binary data, the images used as textures in the effect are saved with base64 encoding. I have been able to implement saving effect data as a Tar archive (haven't pushed that to Github yet). However, the images present in said Tar archive cannot be manipulated unless they are decoded from base64 encoding.
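
    Short of wiring up the backup-specs hook variables, a thin wrapper around tar may be the simplest route: extract as usual, then decode every PNG in place. A sketch in bash (archive name and target directory are placeholders):

        #!/bin/bash
        # unpack the archive, then turn base64-encoded PNGs back into binary PNGs in place
        mkdir -p effect && tar -xf effect.tar -C effect
        find effect -type f -name '*.png' -print0 |
          while IFS= read -r -d '' f; do
            base64 -d "$f" > "$f.tmp" && mv "$f.tmp" "$f"
          done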

    Read the article

  • How to fix massive lag on ZyXEL HomePlug AV powerline adapters?

    - by Tim Abell
    I have 3 ZyXEL Homeplug AV powerline adapters as per the one in the review below. I have two plugged in currently, one into my Be / Thompson wireless router, and one into my desktop pc (box1). every now and then the link indicator on the adapters (the mains link, not the ethernet link) goes nutty, and performance falls off a cliff (see below). http://www.gadgetspeak.com/gadget/article.rhtm/753/479266/ZyXEL_PowerLine_HomePlug_AV_PLA401.html 64 bytes from box1 (192.168.1.101): icmp_seq=1064 ttl=64 time=996 ms 64 bytes from box1 (192.168.1.101): icmp_seq=1065 ttl=64 time=549 ms 64 bytes from box1 (192.168.1.101): icmp_seq=1066 ttl=64 time=6.15 ms 64 bytes from box1 (192.168.1.101): icmp_seq=1067 ttl=64 time=1400 ms 64 bytes from box1 (192.168.1.101): icmp_seq=1068 ttl=64 time=812 ms 64 bytes from box1 (192.168.1.101): icmp_seq=1069 ttl=64 time=11.1 ms 64 bytes from box1 (192.168.1.101): icmp_seq=1070 ttl=64 time=1185 ms 64 bytes from box1 (192.168.1.101): icmp_seq=1071 ttl=64 time=501 ms 64 bytes from box1 (192.168.1.101): icmp_seq=1072 ttl=64 time=1975 ms 64 bytes from box1 (192.168.1.101): icmp_seq=1073 ttl=64 time=970 ms ^C --- box1 ping statistics --- 1074 packets transmitted, 394 received, +487 errors, 63% packet loss, time 1082497ms rtt min/avg/max/mdev = 5.945/598.452/3526.454/639.768 ms, pipe 4 Any idea how to diagnose/fix? I'm on linux so installing the windoze software that came with them is not something I'm terribly keen to do.

    Read the article

  • How can Standard User change file associations in Windows 2000?

    - by Gary M. Mugford
    One of my clients is still running Win2K server with a host of Win2K workstations. And no net admin, due to the downturn of the economy over the years. I'm sort of helping out. Out of my depth, but I am a loyal foot soldier. A problem I encounter rather too often is a user double-clicks on a file in Explorer and then either gets no action, or the wrong program to run. It's a case of a missing or out-of-date file association. The current cure is to temporarily upgrade the user from Standard to Power, do the FA switch and then change back. As Winnie would whine, 'Oh, bother!' At any rate, I thought I'd ask here. Is there a method/program to run without the rigamarole FROM the Standard Users account on the workstation to edit/add a file association? I assume the program route would involve RunAs. I 'believe' most of the workstations run the RunAs service, but I could be wrong. I understand that's required, if there is to be a solution. Any help accepted with thanks. GM NOTE: Seems wassociate from http://www.xs4all.nl/~wstudios/Associate/index.html can resolve the issue.
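
    One possibility, assuming the RunAs service really is running on the workstations: file associations live under HKEY_CLASSES_ROOT, so they can be set from a cmd.exe started with admin credentials using assoc and ftype, without touching the user's group membership. The domain, account name, and example extension below are placeholders:

        runas /user:DOMAIN\adminaccount cmd
        rem then, in the admin prompt:
        assoc .log=txtfile
        ftype txtfile=%SystemRoot%\system32\notepad.exe %1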

    Read the article

  • svnsync loses revision properties although hook installed

    - by roesslerj
    Hello all! I have a pretty weird problem. We have setup an SVN-Mirror via cronjob (because it needs to go from inside to outside of a firewall, so no post-commit-hook possible) and svnsync. We installed a pre-revprop-hook just as told. Everything seems to work fine, except that it doesn't. E.g. when manually executing the script. # svnsync --non-interactive sync file://<path-to-mirror> --source-username <usr> --source-password <pwd> Committed revision 19817. Copied properties for revision 19817. No error, no complaints. But if checking for the revision properties it says: # svnlook info <path-to-mirror> 0 # svn info -r HEAD file://<path-to-mirror> 2>&1 Path: <root-of-mirror> URL: file://<path-to-mirror> Repository Root: file://<path-to-mirror> Repository UUID: <uid> Revision: 19817 Node Kind: directory Last Changed Rev: 19817 So somehow the author and timestamp information gets lost. But we need that information for our internal processes. Since no error or warning is produced I have absolutely no idea even where to start to look. Everything is local (except for the remote master), so there are no server-logs to look at. I also tried to manually recopy via svnsync copy-revprops (http://chestofbooks.com/computers/revision-control/subversion-svn/svnsync-Copy-revprops-Ref-svnsync-C-Copy-revprops.html). It says Copied properties for revision 19885. But when I query them, it's just the same. Any ideas how I could approach that problem, or even better -- how to solve it? Any ideas appreciated.
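
    For reference, a typical pre-revprop-change hook on the mirror looks like the sketch below (the sync user name is a placeholder; the hook receives REPOS REV USER PROPNAME ACTION as arguments):

        #!/bin/sh
        # pre-revprop-change: only the sync user may set revision properties
        SYNC_USER="svnsync"
        if [ "$3" = "$SYNC_USER" ]; then exit 0; fi
        echo "Only $SYNC_USER may change revision properties" >&2
        exit 1

    Afterwards, svnlook can confirm whether svn:author and svn:date actually landed in the mirror:

        svnlook propget --revprop -r 19817 <path-to-mirror> svn:author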

    Read the article

  • IIS doesn't serve certain file extensions

    - by Alekc
    Hi, I have this weird issue on a Win 2k3 server with IIS: IIS has several sites, and in one of them I need to create a subdirectory and set it up as a web application. I've noticed that if I create a new directory and put some .js/.txt files into it, they will not be served by IIS (IE gives the error "Internet Explorer cannot display the webpage"). If I put the same files in a subdirectory of another, older site, they show correctly. By sniffing traffic I've seen that IIS replies with status 200 and then drops the connection completely http://domain.com/test2/prova.txt GET /test2/prova.txt HTTP/1.1 Host: domain.com User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7 (.NET CLR 3.5.30729) Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 300 Connection: keep-alive HTTP/1.x 200 OK If I rename the file prova.txt to prova.asp, for example, it shows without problems, so it shouldn't be a permissions issue. After some research I've found that this can be caused by missing MIME types, so I've checked that .txt and .js are present and served by aspnet_isapi.dll. And here comes another weird thing: if I remove the MIME mapping from the directory's properties it's served correctly, but the same thing doesn't work with .js. I'm really beginning to run out of ideas; does anyone have a hint? Thanks in advance.

    Read the article
