Search Results

Search found 35475 results on 1419 pages for 'text shadow'.

Page 1161/1419 | < Previous Page | 1157 1158 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168  | Next Page >

  • Having trouble mapping Sharepoint document library as a Network Place

    - by Sdmfj
    I am using Office 365, SharePoint Online 2013. Using Internet Explorer, these are the steps I have taken: I ticked "Keep me signed in" on the portal.microsoftonline.com page (it redirects me to a GoDaddy login page because Office 365 was purchased through them), added these sites to Trusted Sites (as well as every page in the process), and chose automatic logon in Internet Explorer. Once on the document library, I use "Open with Explorer" and copy the address as text. I then go to My Computer, right-click to add a network place, and paste in the document library address. It successfully adds the library as a network place about 30% of the time; I can run through this same process three times in a row and it will fail the first two times, then succeed. It works for a little while, and then I get an error that the DNS cannot be found. I need multiple users in our organization to be able to access this document library as if it were a mapped network drive on our local network. Is there an easier way to do this? I could just sync using the OneDrive app, but I thought direct access to the files would be better than worrying about users keeping their files synced.
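
    For what it's worth, the mapping Windows builds here is a WebDAV path, so it can also be scripted rather than clicked through. A hedged sketch (the site path below is a placeholder, and it assumes the machine already holds an authenticated Office 365 session):

        import subprocess

        # SharePoint Online exposes document libraries over WebDAV as
        # \\tenant.sharepoint.com@SSL\DavWWWRoot\<site>\<library>
        # (placeholder tenant/site below - substitute your own)
        unc = r'\\example.sharepoint.com@SSL\DavWWWRoot\sites\team\Shared Documents'

        # Map it to a drive letter; /persistent:yes re-creates it at logon
        subprocess.run(['net', 'use', 'Z:', unc, '/persistent:yes'], check=True)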

    Read the article

  • "Enter/Return" Key with Word Mobile on Windows Mobile

    - by Maarx
    Some of our employees are using PDAs running Windows Mobile. I wish I could provide more data regarding versions, but frankly these things aren't my jurisdiction; someone's simply come to me looking for what they thought would be a quick fix. They're using Word Mobile and a barcode scanner to record large volumes of data. The scanner's default action is to insert the scanned text exactly as if it had been input with the keyboard, and it puts a newline at the end. That's great, because it's exactly what we need it to do: separate data with newlines. The issue comes when the system can't read the barcode and the employee has to type in the data by hand. They've discovered a very peculiar quirk of Mobile applications: pressing the hardware Enter/Return key on the keyboard appears to save and exit the application. They've realized that using the stylus to "click" the virtual on-screen keyboard's Enter/Return key will add the necessary newline, but it's a huge inconvenience for them. How do I change the default behavior of the Enter/Return key for Word Mobile so that it inserts a newline instead?

    Read the article

  • 2010 CGI script failure

    - by Barry F
    Hi. I hope you can help; I'm just a beginner! I have listed a few extra details which may not be relevant. I upload CGI scripts to a local/personal directory on an Apache/2.2.10 server, using FTP95Pro in ASCII mode. The scripts execute correctly under perl on the web server in a terminal session, so my code has no fatal syntax errors. Web pages invoke each CGI script at /cgi-bin/. There are symbolic links which link the system directory files to my local directory files, FollowSymLinks is enabled (I'm unsure how), and permissions are correct (755). This set-up hasn't changed, apparently, and the scripts have executed perfectly for years, up to 2010. But now, in 2010, I have replaced working scripts with new script files with exactly the same text, filenames and permissions; only the date (last modified) has changed. Now I receive a 500 Internal Server Error and cannot determine why. My server administrator assumes I have code errors, but the code is unchanged since last year and runs fine (albeit with no arguments) on the web-server console using perl myscript.cgi. Is there anything you can think of which may have changed? I'm suspicious of the new decade. I think the server swapped from Linux to a Windows OS last year, but my server administrator got it all working OK. Is there something unusual he may have missed, related to 2010? Thank you in advance.
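
    One common cause of a sudden 500 after re-uploading an otherwise unchanged script is line endings or a mangled shebang introduced by the transfer. A small hedged local check (the filename is taken from the post; adjust as needed):

        # Check a CGI script for CRLF line endings and a sane shebang line
        with open('myscript.cgi', 'rb') as f:
            data = f.read()

        if b'\r\n' in data:
            print('File contains CRLF line endings; Apache may answer with a 500.')
        if not data.startswith(b'#!'):
            print('File does not start with a shebang line.')
        else:
            print('Shebang:', data.split(b'\n', 1)[0])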

    Read the article

  • Why AltGr+P doesn't work (AutoHotkey)

    - by voodoomsr
    Hi guys, I try and try to find the bug in this script but I can't. Maybe some of you can give me a hint... The problem: when I press AltGr+P, the Delete key is supposed to be triggered, but the weird thing is that after one successful delete, if I keep pressing AltGr+P a "p" appears and Delete isn't triggered any more. In the meantime I tested another solution: move to the right and then delete with Backspace. That works, but when I have text selected this alternative isn't good... Here is the code:

        #InstallKeybdHook

        ; frequently used characters
        RAlt & e::
        SendInput []{Left}
        Return
        RAlt & w::
        SendInput <>{Left}
        Return
        RAlt & d::
        SendInput (){Left}
        Return
        RAlt & s::
        SendRaw {}
        SendInput {Left}
        Return
        RAlt & x::
        SendInput ""{Left}
        Return
        RAlt & c::
        SendInput ''{Left}
        Return
        RAlt & f::
        SendInput *
        Return
        RAlt & r::
        SendRaw +
        Return
        RAlt & v::
        SendInput -
        Return

        ; start and end of line
        RAlt & a::
        SendInput {Home}
        Return
        RAlt & z::
        SendInput {End}
        Return

        ; movements while editing
        /*
        RAlt & p::
        SendInput {Right}{BackSpace}
        Return
        */
        <^>!p::
        Send {Del}
        Return
        RAlt & o::
        SendInput {Up}
        Return
        RAlt & l::
        SendInput {Down}
        Return
        RAlt & k::
        SendInput {Left}
        Return
        RAlt & ñ::
        SendInput {Right}
        Return
        RAlt & ,::
        SendInput {Enter}
        Return
        RAlt & i::
        SendInput {BackSpace}
        Return

        ;; clipx
        ^mbutton::
        sendinput ^+{insert}
        Return

        ^+k::^+Left
        +k::+Left
        ^k::Left
        +l::+Down
        ^+l::^+Down
        ^l::^Down
        +ñ::+Right
        ^+ñ::^+Right
        ^ñ::^Right
        +o::+Up
        ^+o::^+Up
        ^o::^Up
        +a::+Home
        ^+a::^+Home
        +z::+End
        ^+z::^+End

    Read the article

  • How to test nginx proxy timeouts

    - by mkorszun
    Target: I would like to test all of Nginx's proxy timeout parameters in a very simple scenario. My first approach was to create a really simple HTTP server and insert sleeps at three points: between listen and accept, to test proxy_connect_timeout; between accept and read, to test proxy_send_timeout; and between read and send, to test proxy_read_timeout.

    Test:

    1) Server code (Python):

        import socket
        import os
        import time
        import threading

        def http_resp(conn):
            conn.send("HTTP/1.1 200 OK\r\n")
            conn.send("Content-Length: 0\r\n")
            conn.send("Content-Type: text/xml\r\n\r\n\r\n")

        def do(conn, addr):
            print 'Connected by', addr
            print 'Sleeping before reading data...'
            time.sleep(0)  # Set to test proxy_send_timeout
            data = conn.recv(1024)
            print 'Sleeping before sending data...'
            time.sleep(0)  # Set to test proxy_read_timeout
            http_resp(conn)
            print 'End of data stream, closing connection'
            conn.close()

        def main():
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            s.bind(('', int(os.environ['PORT'])))
            s.listen(1)
            print 'Sleeping before accept...'
            time.sleep(130)  # Set to test proxy_connect_timeout
            while 1:
                conn, addr = s.accept()
                t = threading.Thread(target=do, args=(conn, addr))
                t.start()

        if __name__ == "__main__":
            main()

    2) Nginx configuration: I have extended the default Nginx configuration by setting proxy_connect_timeout explicitly and adding a proxy_pass pointing to my local HTTP server:

        location / {
            proxy_pass http://localhost:8888;
            proxy_connect_timeout 200;
        }

    3) Observations:

    proxy_connect_timeout - even though it is set to 200s and the server sleeps only 130s between listen and accept, Nginx returns a 504 after ~60s, which might be because of the default proxy_read_timeout value. I do not understand how proxy_read_timeout could affect the connection at such an early stage (before accept). I would expect 200 here. Please explain!

    proxy_send_timeout - I am not sure my approach to testing proxy_send_timeout is correct; I think I still do not understand this parameter. After all, a delay between accept and read does not trigger proxy_send_timeout.

    proxy_read_timeout - this one seems pretty straightforward: a delay between read and write does the job.

    So I guess my assumptions are wrong and I probably do not understand the proxy_connect and proxy_send timeouts properly. Can someone explain them to me using the above test, if possible (modifying it if required)?
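
    One detail that may bear on the first observation: the kernel completes TCP handshakes into the listen backlog before the application ever calls accept(), so the proxy's connect() can succeed while the test server is still sleeping, which would leave the read timeout as the timer that actually fires. A minimal Python 3 sketch of that behaviour (hypothetical standalone demo, port chosen to match the test above):

        import socket

        # Server socket that listens but never calls accept()
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(('127.0.0.1', 8888))
        srv.listen(1)  # the kernel now answers handshakes into the backlog

        # The client connect succeeds although accept() is never called
        cli = socket.create_connection(('127.0.0.1', 8888), timeout=5)
        print('connected:', cli.getpeername())
        cli.close()
        srv.close()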

    Read the article

  • IPv6 working fine, IPv4 throws OpenSSL error

    - by jippie
    I am building a webserver ( http://blog.linformatronics.nl/ ), which functions just fine over both IPv4 and IPv6 when using a non-SSL connection. However, when I connect to it through https, IPv6 works as expected, but an IPv4 connection throws a client-side error, and the server-side logs are empty for the IPv4/https connection. Summarized in a table:

             | http  | https
        -----+-------+-------------------------------------------------------
        IPv4 | works | OpenSSL error, failed. No server side logging.
        -----+-------+-------------------------------------------------------
        IPv6 | works | self signed certificate warning, but works as expected

    Apparently the SSL tunnel isn't even set up, which accounts for the Apache logs being empty. But why does it work fine for IPv6 and fail for IPv4? My question is: why is this OpenSSL error being thrown, and how can I solve it? Below is some extra information about the setup.

    IPv6 https - command used to reproduce the IPv6/https behaviour:

        $ wget --no-check-certificate -O /dev/null -6 https://blog.linformatronics.nl
        --2012-11-03 15:46:48--  https://blog.linformatronics.nl/
        Resolving blog.linformatronics.nl (blog.linformatronics.nl)... 2001:980:1b7f:1:a00:27ff:fea6:a2e7
        Connecting to blog.linformatronics.nl (blog.linformatronics.nl)|2001:980:1b7f:1:a00:27ff:fea6:a2e7|:443... connected.
        WARNING: cannot verify blog.linformatronics.nl's certificate, issued by `/CN=localhost':
          Self-signed certificate encountered.
        WARNING: certificate common name `localhost' doesn't match requested host name `blog.linformatronics.nl'.
        HTTP request sent, awaiting response... 200 OK
        Length: 4556 (4.4K) [text/html]
        Saving to: `/dev/null'

        100%[=======================================================================>] 4,556       --.-K/s   in 0s

        2012-11-03 15:46:49 (62.5 MB/s) - `/dev/null' saved [4556/4556]

    IPv4 https - command used to reproduce the IPv4/https behaviour:

        $ wget --no-check-certificate -O /dev/null -4 https://blog.linformatronics.nl
        --2012-11-03 15:47:28--  https://blog.linformatronics.nl/
        Resolving blog.linformatronics.nl (blog.linformatronics.nl)... 82.95.251.247
        Connecting to blog.linformatronics.nl (blog.linformatronics.nl)|82.95.251.247|:443... connected.
        OpenSSL: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
        Unable to establish SSL connection.

    Notes: I am on Ubuntu Server 12.04.1 LTS.
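
    The "unknown protocol" message usually means the bytes coming back on port 443 are not a TLS ServerHello at all (for instance, a plain-HTTP listener, or a NAT rule forwarding 443 to the wrong port). A hedged Python 3 probe to see what the IPv4 path actually answers with (address and hostname taken from the transcripts above):

        import socket
        import ssl

        addr = ('82.95.251.247', 443)  # IPv4 address from the wget output

        try:
            raw = socket.create_connection(addr, timeout=10)
            ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
            ctx.check_hostname = False
            ctx.verify_mode = ssl.CERT_NONE  # self-signed cert expected
            tls = ctx.wrap_socket(raw, server_hostname='blog.linformatronics.nl')
            print('TLS handshake OK:', tls.version())
        except ssl.SSLError as e:
            print('TLS handshake failed:', e)
            # See whether the port speaks plain HTTP instead of TLS
            raw = socket.create_connection(addr, timeout=10)
            raw.sendall(b'HEAD / HTTP/1.0\r\nHost: blog.linformatronics.nl\r\n\r\n')
            print('raw reply:', raw.recv(200))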

    Read the article

  • Why does running "$ sudo chmod -R 664 . " cause me to get access denied on all affected directories?

    - by Codemonkey
    I have a project folder with messy permissions on all its files. I've had the bad tendency of setting everything to octal permissions 777 because it solved all non-security-related issues; then FTP uploads, files created by text editors, etc. have their own sets of permissions, making everything a mess. I've decided to pull myself together and start using permissions the way they were meant to be used. I figured 664 was a good default for all my files and folders; I'd just remove permissions for others on private files and add +x for executable files. The second I changed my project folder to 664, however:

        $ sudo chmod -R 664 .
        $ ls
        ls: cannot open directory .: Permission denied

    This makes no sense to me. I have read/write permissions, and I'm the owner of the project folder. The leftmost part of ls -l in my project folder looks like this:

        -rw-rw-r--  1 codemonkey codemonkey ...
        drw-rw-r--  5 codemonkey codemonkey ...
        -rw-rw-r--  1 codemonkey codemonkey ...
        -rw-rw-r--  1 codemonkey codemonkey ...
        drw-rw-r--  3 codemonkey codemonkey ...
        -rw-rw-r--  1 codemonkey codemonkey ...
        -rw-rw-r--  1 codemonkey codemonkey ...
        -rw-rw-r--  1 codemonkey codemonkey ...
        drw-rw-r--  4 codemonkey codemonkey ...
        drw-rw-r--  5 codemonkey codemonkey ...

    I assume this has something to do with the permissions on the directories, but what?
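
    For comparison, the usual convention applies 664 to files but keeps the execute bit on directories, since a directory needs x in order to be entered at all. A hedged sketch of that split (the project path is a placeholder):

        import os

        ROOT = '/home/codemonkey/project'  # placeholder project path

        for dirpath, dirnames, filenames in os.walk(ROOT):
            os.chmod(dirpath, 0o775)  # directories keep x so they can be traversed
            for name in filenames:
                os.chmod(os.path.join(dirpath, name), 0o664)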

    Read the article

  • Timeout option not working on EFI Windows 7 / Windows 8 dual-boot machine

    - by Guenter
    I have a Gigabyte GA-Z77M-D3H mobo and installed Windows 8 Pro and Windows 7 Ultimate on two SSDs (in that order) in EFI mode. Now when I start my computer, I get the Windows boot menu (text mode) with the two OSes to choose from, but I have to press RETURN manually to have the computer boot into the chosen OS. Even if I wait an hour, no default action takes place. Using bcdedit (from either of the OSes) I can successfully change the timeout value, and it shows up in the bcdedit (no parameters) output. But it doesn't fire... Here is my current bcdedit output (headers are in German, but the values should be readable):

        Windows-Start-Manager
        ---------------------
        Bezeichner              {bootmgr}
        device                  partition=O:
        path                    \EFI\Microsoft\Boot\bootmgfw.efi
        description             Windows Boot Manager
        locale                  de-DE
        inherit                 {globalsettings}
        integrityservices       Enable
        default                 {default}
        resumeobject            {5ad2802c-c60a-11e2-acdb-80331c501b11}
        displayorder            {default}
                                {current}
                                {5ad2802a-c60a-11e2-acdb-80331c501b11}
                                {5ad28028-c60a-11e2-acdb-80331c501b11}
                                {5ad28029-c60a-11e2-acdb-80331c501b11}
        toolsdisplayorder       {memdiag}
        timeout                 5
        displaybootmenu         Yes

        Windows-Startladeprogramm
        -------------------------
        Bezeichner              {default}
        device                  partition=W:
        path                    \Windows\system32\winload.efi
        description             Windows 7
        locale                  de-DE
        inherit                 {bootloadersettings}
        recoverysequence        {5ad2802e-c60a-11e2-acdb-80331c501b11}
        recoveryenabled         Yes
        osdevice                partition=W:
        systemroot              \Windows
        resumeobject            {5ad2802c-c60a-11e2-acdb-80331c501b11}
        nx                      OptIn

        Windows-Startladeprogramm
        -------------------------
        Bezeichner              {current}
        device                  partition=C:
        path                    \Windows\system32\winload.efi
        description             Windows 8
        locale                  de-DE
        inherit                 {bootloadersettings}
        recoverysequence        {5ad28033-c60a-11e2-acdb-80331c501b11}
        integrityservices       Enable
        recoveryenabled         Yes
        isolatedcontext         Yes
        allowedinmemorysettings 0x15000075
        osdevice                partition=C:
        systemroot              \Windows
        resumeobject            {5ad28031-c60a-11e2-acdb-80331c501b11}
        nx                      OptIn
        bootmenupolicy          Standard
        hypervisorlaunchtype    Auto

    (This output is from Windows 8; the Windows 7 output looks nearly identical.) If the problem comes from a bad EFI Windows boot manager installation, can it be fixed without losing my Windows installations?

    Read the article

  • One Apache VirtualHost entry overrides another?

    - by johnlai2004
    I can't tell why one Apache virtual host entry keeps overriding another. The following file:

        # filename: cbl
        <VirtualHost 74.207.237.23:80>
            ServerAdmin [email protected]
            ServerName completebeautylist.com
            ServerAlias www.completebeautylist.com
            DocumentRoot /srv/www/cbl/production/public_html/
            ErrorLog /srv/www/cbl/production/logs/error.log
            CustomLog /srv/www/cbl/production/logs/access.log combined
        </VirtualHost>

    keeps overriding this file:

        # filename: theccco.org
        <VirtualHost 74.207.237.23:80>
            SuexecUserGroup "#1010" "#1010"
            ServerName theccco.org
            ServerAlias www.theccco.org
            ServerAlias webmail.theccco.org
            ServerAlias admin.theccco.org
            DocumentRoot /home/theccco/public_html
            ErrorLog /var/log/virtualmin/theccco.org_error_log
            CustomLog /var/log/virtualmin/theccco.org_access_log combined
            ScriptAlias /cgi-bin/ /home/theccco/cgi-bin/
            DirectoryIndex index.html index.htm index.php index.php4 index.php5

            <Directory /home/theccco/public_html>
                Options -Indexes +IncludesNOEXEC +FollowSymLinks
                allow from all
                AllowOverride All
            </Directory>

            <Directory /home/theccco/cgi-bin>
                allow from all
            </Directory>

            RewriteEngine on
            RewriteCond %{HTTP_HOST} =webmail.theccco.org
            RewriteRule ^(.*) https://theccco.org:20000/ [R]
            RewriteCond %{HTTP_HOST} =admin.theccco.org
            RewriteRule ^(.*) https://theccco.org:10000/ [R]

            Alias /dav /home/theccco/public_html

            <Location /dav>
                DAV On
                AuthType Basic
                AuthName theccco.org
                AuthUserFile /home/theccco/etc/dav.digest.passwd
                Require valid-user
                ForceType text/plain
                Satisfy All
                RewriteEngine off
            </Location>
        </VirtualHost>

    I tried a2ensite, a2dissite, and reloading, and I get this message:

        * Reloading web server config apache2
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        [Thu Apr 15 10:47:36 2010] [warn] NameVirtualHost 74.207.237.23:443 has no VirtualHosts

    Aside from that, I don't know what else could be wrong. Can anyone tell me what to do?

    Read the article

  • How to turn off Excel "Header Row" without losing data in it?

    - by Ken
    I've been sent an Excel spreadsheet with a weird first row. Some of the cells say "Column1", "Column2", etc., but I can't delete their contents. If I select a cell and hit Backspace, it goes blank, but when I press Return it goes right back to saying "Column1". I found another answer here that suggested this could be caused by cell validation, but the validation window says "Any value" and also "show alert" (and I'm not seeing an alert), so I don't think that's it. The first row is white text on a blue background, if that means anything. The spreadsheet was sent to me in XLSX format, but I tried resaving as XLS and opening that, and it makes no difference. This is with the "ribbon" version of Excel (they got rid of the Help menu, so I don't know how to see what version number it is!). Thanks!

    Update: The Excel online help says to use ribbon Home tab - Cells - Delete - ... to delete cells. When I select anything in the first row, this menu is dimmed. So maybe Excel doesn't think row 1 consists of "cells"? Though I don't know what else it would call them.

    Update 2: I found it, kind of. If I click the "Design" tab in the ribbon and then uncheck "Header Row", the first row becomes a normal row of cells again. Unfortunately, the contents disappear entirely. I want to delete a few cells, not all 50+! And if I copy the first row before turning off "Header Row", it disappears from the clipboard when I uncheck it. So I kind of know what mode it's stuck in, but not a good way out of it.

    Read the article

  • Tridion 2011 SP1 Core Service - expose to live server within PROD env

    - by Neil
    We have a requirement to allow our users to submit information about their "projects" - a small piece of text and a single image they upload. Ultimately we'll have a listing page of user-contributed projects that others can comment on and rate. We've decided to use Tridion's UGC for ratings and comments site-wide for this first phase, which has got me thinking: UGC is tied to Tridion-published pages and components, so if we want UGC on our user-submitted projects, they'll have to be created within Tridion as components themselves, not sit in some custom db table? Is this where the Core Service could come in? My understanding is that the CD Web Service is for retrieval, not for interacting with the Content Manager. Is it OK (!) architecturally to expose the Core Service only to our live application servers, so our backend .NET code can create "project components" that can then be published by editors, allowing them to be commented on? Everything sounds pretty neat and tidy apart from the "exposing the Core Service to live servers" bit. Without this I'd have to write a custom way to "transfer" submissions back over to the Content Manager - maybe like Audience Manager Sync works? Has anyone done this before?

    Read the article

  • Copy UNC network path (not drive letter) for paths on mapped drives from Windows Explorer

    - by Ernest Mueller
    I frequently want to share network paths to files with other folks on my team via email or chat. We have a lot of mapped drives here, both ones we set up ourselves and ones set up by our IT overlords. What I'd like to be able to do is to copy the full real path (not the drive letter) from Windows Explorer to send to folks. Example: I have a file in my "Q:" drive, \\cartman\users\emueller, and I want to send a link to the file foo.doc therein to coworkers. When I copy the file path (shift+right click, "copy as path") it gets the file name "Q:\foo.doc". This is unhelpful to others, who would need to see \\cartman\users\emueller\foo.doc to be able to consume the link. In Explorer it clearly knows it - in the address bar I see "Computer - emueller (\\cartman\users) (Q:) -". Is there a way to say "hey man copy that path as text with the \\cartman\users\emueller not the Q: in it?" I know I could just set up mapped network locations instead of the mapped drives for the ones that I set up personally and avoid this problem, but most of the mapped drives like the "users" share come from our IT policy. I could just make a separate network location and then ignore my Q: drive but that's inconvenient (and they do it so they can move accounts across servers). Sure my emailed path might eventually break because I'm losing the drive letter indirection but that's OK with me.
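
    If scripting is an option, the drive-letter-to-UNC translation Explorer does internally is exposed via the Windows WNetGetUniversalName API. A hedged sketch using the pywin32 package (an assumption: pywin32 is installed; the path is the example from above):

        import win32wnet

        # Translate a drive-letter path to its UNC form, e.g.
        # Q:\foo.doc -> \\cartman\users\emueller\foo.doc
        path = r'Q:\foo.doc'
        unc = win32wnet.WNetGetUniversalName(path, 1)  # 1 = UNIVERSAL_NAME_INFO level
        print(unc)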

    Read the article

  • Issues with VSFTPD / FTP on Linux Ubuntu server - Steps for Troubleshooting?

    - by jnolte
    I am dealing with an issue I am unclear on how to resolve and have been pulling my hair out for some time. I have been trying to configure an FTP user using the following steps (we use this same documentation on all servers):

    - Install the FTP server: apt-get install vsftpd
    - In /etc/vsftpd.conf, set local_enable and write_enable to YES and the anonymous user to NO
    - Restart so the changes take effect: service vsftpd restart
    - Add a WordPress user for FTP access in WP Admin
    - Create a fake shell for the user: add /usr/sbin/nologin to the bottom of the /etc/shells file
    - Add an FTP user account: useradd username -d /var/www/ -s /usr/sbin/nologin then passwd username
    - Add these lines to the bottom of /etc/vsftpd.conf:
          userlist_file=/etc/vsftpd.userlist
          userlist_enable=YES
          userlist_deny=NO
    - Add username to the list at the top of /etc/vsftpd.userlist
    - Restart vsftpd: service vsftpd restart
    - Make sure the firewall is open for FTP: ufw allow ftp
    - Allow username to modify the /var/www directory: chown -R /var/www

    I have also gone through everything listed on this post, and no luck: I am getting "connection refused". This is something we do over and over, and for some reason it is not cooperating here. The setup is Ubuntu 12.04 LTS and vsftpd 2.3.5. Thank you in advance.
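
    Since the symptom is "connection refused", a quick first check is whether anything is listening on port 21 at all, before digging further into vsftpd.conf. A hedged Python sketch (host and credentials are placeholders):

        from ftplib import FTP, error_perm

        try:
            ftp = FTP()
            ftp.connect('your.server.example', 21, timeout=10)  # placeholder host
            print(ftp.getwelcome())
            ftp.login('username', 'password')  # placeholder credentials
            print(ftp.nlst())
            ftp.quit()
        except ConnectionRefusedError:
            print('Nothing listening on port 21: service down or firewall blocking.')
        except error_perm as e:
            print('Connected, but the server rejected the login:', e)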

    Read the article

  • Understanding tcptraceroute versus HTTP response

    - by kojiro
    I'm debugging a web server that has a very high wait time before responding. The server itself is quite fast and has no load, so I strongly suspect a network problem. Basically, I make a web request:

        $ wget -O/dev/null http://hostname/
        --2013-10-18 11:03:08--  http://hostname/
        Resolving hostname... 10.9.211.129
        Connecting to hostname|10.9.211.129|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: unspecified [text/html]
        Saving to: ‘/dev/null’

        2013-10-18 11:04:11 (88.0 KB/s) - ‘/dev/null’ saved [13641]

    So you see it took about a minute to give me the page, but it does give it to me with a 200 response. So I try a tcptraceroute to see what's up:

        $ sudo tcptraceroute hostname 80
        Password:
        Selected device en2, address 192.168.113.74, port 54699 for outgoing packets
        Tracing the path to hostname (10.9.211.129) on TCP port 80 (http), 30 hops max
         1  192.168.113.1  0.842 ms  2.216 ms  2.130 ms
         2  10.141.12.77  0.707 ms  0.767 ms  0.738 ms
         3  10.141.12.33  1.227 ms  1.012 ms  1.120 ms
         4  10.141.3.107  0.372 ms  0.305 ms  0.368 ms
         5  12.112.4.41  6.688 ms  6.514 ms  6.467 ms
         6  cr84.phlpa.ip.att.net (12.122.107.214)  19.892 ms  18.814 ms  15.804 ms
         7  cr2.phlpa.ip.att.net (12.122.107.117)  17.554 ms  15.693 ms  16.122 ms
         8  cr1.wswdc.ip.att.net (12.122.4.54)  15.838 ms  15.353 ms  15.511 ms
         9  cr83.wswdc.ip.att.net (12.123.10.110)  17.451 ms  15.183 ms  16.198 ms
        10  12.84.5.93  9.982 ms  9.817 ms  9.784 ms
        11  12.84.5.94  14.587 ms  14.301 ms  14.238 ms
        12  10.141.3.209  13.870 ms  13.845 ms  13.696 ms
        13  * * *
        …
        30  * * *

    I tried it again with 100 hops, just to be sure - the packets never get there. So how is it that the server does respond to requests via HTTP, even after a minute? Shouldn't all requests just die? I'm not sure how to proceed with debugging why this server is slow (as opposed to why it responds at all).
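
    One hedged way to narrow down where the minute goes is to time the TCP connect separately from the first response byte (Python 3 sketch; the address is the one from the transcript, and "hostname" is the same placeholder the post uses):

        import socket
        import time

        HOST, PORT = '10.9.211.129', 80  # address from the wget output

        t0 = time.monotonic()
        s = socket.create_connection((HOST, PORT), timeout=120)
        t_connect = time.monotonic() - t0

        s.sendall(b'GET / HTTP/1.0\r\nHost: hostname\r\n\r\n')
        t1 = time.monotonic()
        first = s.recv(1)  # blocks until the first response byte arrives
        t_first_byte = time.monotonic() - t1
        s.close()

        # A fast connect with a slow first byte points at the server/app;
        # a slow connect points at the network path.
        print('connect: %.3fs, first byte: %.3fs' % (t_connect, t_first_byte))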

    Read the article

  • Is this "cache administrator" error my server's problem?

    - by Eoin
    Hey, I have a CentOS VPS running Apache with a phpBB installation. One specific user has received errors when posting a message or logging in to the forum. The issue has arisen in parallel with installing nginx, which serves only the static files of my site; I'm not sure if that is just coincidence. Furthermore, my setup uses redirects (in some cases, double redirects) to point the user to a different virtual folder, so the forum appears to be at /translation/ but the actual files are found in /phpbb/. I'm at a loss as to what the underlying issue may be. My server? The person's ISP? She has tested both at home and at work, with similar issues. The error she receives is:

        While trying to process the request:

        GET /phpbb/index.php?sid=f62c927e7eb8f1d60a92dcc6fd918112 HTTP/1.1
        Host: www.irishgaelictranslator.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-za
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://www.irishgaelictranslator.com/phpbb/ucp.php?mode=login
        Cookie: phpbb3_cipi4_u=96645; phpbb3_cipi4_k=; phpbb3_cipi4_sid=f62c927e7eb8f1d60a92dcc6fd918112; __utma=153470688.1232378553.1294664234.1294664234.1294664234.1; __utmb=153470688.9.10.1294664234; __utmc=153470688; __utmz=153470688.1294664235.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); style_cookie=null

        The following error was encountered:

            Invalid Response

        The HTTP Response message received from the contacted server could not be understood or was otherwise malformed. Please contact the site operator. Your cache administrator may be able to provide you with more details about the exact nature of the problem if needed.

    Read the article

  • How to display escaped characters in tmux status bar

    - by walrus
    I am running tmux from a tty on an embedded Linux device (NOT a terminal emulator). Because the screen is rather small, I want to add some "icons" to the tmux status bar. To achieve this, I have simply created a font with the appropriate glyphs for things like battery or wifi. I can load the font and display the characters with calls that escape into the line-drawing character set, like so:

        echo -e "\xe\234\xf"

    \xe escapes me into line-drawing character mode, \234 is my created character, and \xf returns me to normal character mode so my terminal doesn't start getting goofy. This works perfectly if I enter the command at the terminal, whether tmux is started or not. The issue arises when I try to use it in my ~/.tmux.conf file for the status bar. I currently have a line like this:

        set -g status-right "#(echo -e "\xe\234\xf") #(/script/to/output/powerlevel)

    This simply outputs:

        \xe\234\xf powerlevel

    It goes the same if I try printf instead of echo. This is the output I would expect to get on the terminal if I made the call without passing -e to echo, or without enclosing the statement in quotes. I then decided to wrap the calls to echo or printf in a shell script. Again, the script works when called from the terminal, but not in tmux's status bar: now I get the unprintable character "?" instead of my icon, like this:

        ? powerlevel

    This is what I would expect if I did not use the line-drawing escapes mentioned above, or if I tried to copy and paste the character as text within tmux. In addition, calling these character scripts screws up the rest of my status-right: the clock shows about six digits for minutes when it is called (though it correctly updates only two of them). How can I make tmux respect the escape characters? Any help or insight is greatly appreciated.

    Read the article

  • Can a folder on a NAS be made available as a physical drive in VMWare?

    - by asbjornu
    We are currently in the process of moving from a single web server to two load-balanced web servers, and we are facing some challenges we don't quite know how to fix. One of these is that the current single server hosts applications that write stuff to disk. The applications running on the server expect that when something is written to disk, it will in fact exist later, so it's important that this premise is fulfilled in the dual-server architecture as well. The dual-server setup is a couple of VMware instances with Windows Server 2008 R2 as the guest operating system. Out of the box, these instances do not share any kind of file system, so just moving the applications over would break them, since one instance would write something to the file system that doesn't exist on the other. Thus we need to share a file system between the two virtual servers. Our host has proposed creating a network share on a SAN and mapping it individually on each virtual machine. This doesn't work too well, due to NTFS permissions etc., because the share needs to be accessed by several independent web applications that won't even be in the same application pool. The only solution that kind of works is to hard-code an "identity" for each web application into its web.config file, but this means passwords in clear text, which doesn't sit well with me. Since the servers are virtual, I'm thinking: wouldn't it be possible to make a NAS area available as a physical disk in the guest operating system somehow? Since VMware has full control of the virtual hardware, you'd think it would be able to "fake" a local hard drive in the virtual machine that in reality is a folder on a NAS, but so far I haven't found anything that states how and whether this is possible. So I have to ask the wonderful Server Fault community: can a folder on a NAS be made available as a physical drive (typically D:) in both of the virtual machines?

    Read the article

  • Firefox cannot render icons from Font Awesome webfont set

    - by ADTC
    In Firefox (Windows 7), icons and glyphs that are called from the Font Awesome package do not render properly. An example of this can be seen on the Khan Academy website: below the video, the icons are shown as boxes with hex codes in them, which means the font isn't getting downloaded by Firefox. The icons appear correctly in Chrome (Windows 7), Safari (Mac OS X) and Stainless (Mac OS X). I found a question on Stack Overflow that may explain why this happens: the CSS does use single quotes to enclose the font's src location. However, I don't have write access to Khan Academy's servers, so I can't modify the actual website. I want to know if this can be fixed in Firefox, and how. I can run Greasemonkey scripts if that would help. I've already tried manually downloading the font and adding it to Windows' Fonts folder, but this does not help. For reference, the CSS that sets this font up (not processed properly by Firefox) is:

        @font-face {
            font-family: 'FontAwesome';
            src: url('./fontawesome-webfont.eot');
            src: url('./fontawesome-webfont.eot?#iefix') format('embedded-opentype'),
                 url('./fontawesome-webfont.woff') format('woff'),
                 url('./fontawesome-webfont.ttf') format('truetype'),
                 url('./fontawesome-webfont.svg#FontAwesome') format('svg');
            font-weight: normal;
            font-style: normal;
        }

        [class^="icon-"]:before, [class*=" icon-"]:before {
            font-family: FontAwesome;
            font-weight: normal;
            font-style: normal;
            display: inline-block;
            text-decoration: inherit;
        }
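
    Worth noting: Firefox is stricter than most browsers about webfont loads and blocks cross-origin fonts unless the server sends an Access-Control-Allow-Origin header, so one hedged first check is to look at the headers the font URL actually returns. The URL below is a placeholder; substitute the .woff address shown in Firefox's network panel:

        import urllib.request

        url = 'https://example.com/path/fontawesome-webfont.woff'  # placeholder

        req = urllib.request.Request(url, method='HEAD')
        with urllib.request.urlopen(req) as resp:
            print('status:', resp.status)
            for name in ('Content-Type', 'Access-Control-Allow-Origin'):
                print(name, ':', resp.headers.get(name))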

    Read the article

  • How does one skip "Windows did not shut down successfully" in Win7-64?

    - by XenonofArcticus
    We are migrating an app from an expensive and unreliable dedicated embedded x86 box running WinXP Embedded to COTS hardware (a Dell E6410 laptop) running normal Win7-64. At this time, it's not feasible to deploy using Windows 7 Embedded. The problem is that the system is still sort of "embedded": the power could shut off at virtually any time without prior warning. We've stripped the OS down and removed the battery capability so that it will power down as desired. The app never writes to the disk, so it's not as if we're going to corrupt anything terribly. The system is essentially idle after our app is up and running (with the exception of some computation, graphics, and TCP/IP and serial communications), so the OS enters a pretty stable state rather quickly. After a power loss, however, it rightly complains that Windows did not shut down successfully and presents the user with the Windows Error Recovery text screen. If left alone it does eventually move on and boot just fine, but we'd like to skip that step if possible. WinXP Embedded is designed to do this automatically, so I know it's possible. I've looked at the kernel switches but didn't see anything documented for "skip Windows Error Recovery". I've also read extensively on the startup process: http://homepage.ntlworld.com./jonathan.deboynepollard/FGA/windows-nt-6-boot-process.html - I know I can disable the automatic chkdsk in the registry, but that's not the same thing either. So, how do I streamline the boot process so it doesn't hassle the user about a situation that will be the regular, normal situation?
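
    One candidate worth testing (a hedged suggestion, not verified on this exact image) is the BCD bootstatuspolicy setting, which controls whether the boot manager offers the recovery screen after an unclean shutdown. A sketch of applying it from Python; run from an elevated prompt:

        import subprocess

        # bcdedit's bootstatuspolicy governs the post-failure recovery prompt;
        # 'ignoreallfailures' suppresses it. Verify applicability on a test
        # machine before deploying.
        subprocess.run(
            ['bcdedit', '/set', '{current}', 'bootstatuspolicy', 'ignoreallfailures'],
            check=True,
        )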

    Read the article

  • Varnish returning 503, FetchError (could not get storage)

    - by Archan
    On the current setup we're running into a problem with Varnish. We're running CentOS 5.7 x86_64 xenpv, with cPanel WHM, hosted at VPS.net. Sometimes we receive a Guru Meditation from Varnish, and when we look in the varnishlog with the following command:

        varnishlog -d -c -m TxStatus:503

    it returns output similar to the following:

        15 VCL_call     c recv
        15 VCL_acl      c NO_MATCH devs
        15 VCL_return   c pass
        15 VCL_call     c hash
        15 Hash         c ****
        15 Hash         c *************
        15 VCL_return   c hash
        15 VCL_call     c pass pass
        15 Backend      c 12 default default
        15 TTL          c 1835862523 RFC 0 -1 -1 1332454056 0 1332454055 375007920 0
        15 VCL_call     c fetch hit_for_pass
        15 ObjProtocol  c HTTP/1.1
        15 ObjResponse  c OK
        15 ObjHeader    c Date: Thu, 22 Mar 2012 22:07:35 GMT
        15 ObjHeader    c Server: Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 mod_fcgid/2.3.6
        15 ObjHeader    c X-Powered-By: PHP/5.3.9
        15 ObjHeader    c Expires: Thu, 19 Nov 1981 08:52:00 GMT
        15 ObjHeader    c Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        15 ObjHeader    c Pragma: no-cache
        15 ObjHeader    c Content-Type: text/html; charset=utf-8
        15 ObjHeader    c X-Cacheable: NO:Cache-Control=private
        15 FetchError   c chunked read_error: 12 (Could not get storage)
        15 VCL_call     c error deliver
        15 VCL_call     c deliver deliver

    As far as I have been able to gather, we could try increasing nuke_limit, but we currently have a nuke_limit of 500, and when running varnishstat -1 -f n_lru_nuked we "only" get a total of 1031, even though we have seen the error happen on several pages. When we then run top to see how much memory Varnish is using, it shows only 763m, although we've set it to be allowed to use 1200m. Any ideas what the problem could be?

    Read the article

  • What is the correct way to move a file?

    - by Joe McDonald
    We had an issue at my work where I cut and pasted some files, and immediately when I did it, a ton of files were lost. I've been working in IT for 10+ years; I know how to cut and paste a file. Well, when it went up to my managers as to why the files were lost, they blamed my cut and paste for all the problems, and asked why in the world someone as knowledgeable as me would ever cut and paste a file, and didn't I know that was totally the wrong way to move a file? According to them, the correct way to move a file is to drag it: when cutting and pasting, Windows moves that 1+ GB file (on the server) to the clipboard (on my PC), which, obviously, will cause problems, whereas dragging a file never hits the clipboard. Be honest, I don't believe that for a minute. I believe that when I cut and paste text, it goes to the clipboard; I've seen it in old versions of Windows. But when right-clicking on 100+ files totalling 1+ GB, I can't believe that all that data is copied immediately out of whatever share I'm on at the server, across my wireless, onto my laptop's local clipboard, just to go back to the server to another share. It seems they would build some logic into the server OS or my local OS (more likely my local OS) that says: when copying files, don't perform the move action until I click paste, and if the files are staying local to where they were before, just move them. So, who's right?

    Read the article

  • Connecting a laptop to a TV via HDMI

    - by Madmartigan
    I just bought a new Dell XPS 17 laptop (Win7) that only has HDMI output. My last two laptops had VGA, which I used to connect to my Sony Bravia 32" TV with no issues, but with HDMI it's been quite a headache. Drivers for the display adapters (Intel(R) HD Graphics Family and NVIDIA GeForce GT 550M) have been updated to the latest versions. I went to a store and plugged in to 4 different TVs from different manufacturers. A sales rep and I spent about 30 minutes being baffled by the results (which are the same as with my current TV):

    - Extremely buggy behavior in the Nvidia and Windows display/resolution control panels
    - Cannot extend or duplicate displays; can only select one
    - Third and fourth output devices "randomly" detected by the Windows control panel
    - Could not get the screen to fit the output (edges cut off on all sides by about half an inch)
    - Resolution and colors less than perfect; artifacts around text
    - Display "randomly" cuts out
    - Defaults to TV output only when plugged in
    - Cannot change resolution on either device when connected
    - No audio from the TV

    I also plugged in to 3 monitors from different manufacturers:

    - Defaults to duplicated displays when plugged in
    - Everything works perfectly

    So far, four people have gone through all the settings on the laptop with no luck. I had similar, but not exactly matching, results with a different laptop. I'm using the Sony Bravia currently at home, but in order to get it to work I have to turn on the laptop, wait until the display shows up on it, close the lid, then cycle through each input on the TV until I come back around to the HDMI port again - and still I have the symptoms described above. However: once in a while, it just works. Sometimes, seemingly randomly, the output fits the screen perfectly. Sometimes the audio comes through the TV speakers too, but not always. Usually my screen saver "Mystify" will come up with a message that it cannot be displayed due to a limitation of the video card, but sometimes it works fine. These three things seem to be independent of each other and don't always happen together. So, is there any way to get the laptop to output correctly to a TV, or is it just not meant to be?

    Read the article

  • How to use SSH Public Key with PuTTY to connect to a Linux machine

    - by ysap
    I am trying to set up a public-key SSH connection from a Windows 7 machine to a Red Hat Linux machine. The ultimate purpose is to use pscp (PuTTY's version of scp) from the command terminal without the need to type the password repeatedly. Following PuTTY's documentation and other online sources, I used PuTTYgen to generate a key pair. I then copied the generated public key to a ~/.ssh/authorized_keys file on the Linux machine (as far as I can tell, it runs an OpenSSH server). To check the connection, I run PuTTY and set the username and private key file in the appropriate places in its GUI. However, when trying to connect using PuTTY's SSH, the connection uses the preset username, but I get an error message of "Server refused our key" and a prompt for the password. I then tried to copy-paste the public key text from PuTTYgen's GUI into the authorized_keys file, but that did not work either. How should I set up a public key connection from Win 7 to Linux? And how do I use this with pscp (rather than PuTTY's ssh)?
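
    Two classic causes of "Server refused our key" are wrong permissions (sshd silently ignores authorized_keys when ~/.ssh or the file itself is group/world writable) and pasting the key in PuTTYgen's multi-line export format rather than OpenSSH's single-line "ssh-rsa AAAA... comment" form. A small hedged check to run on the Linux side (assumes Python is available there):

        import os
        import stat

        ssh_dir = os.path.expanduser('~/.ssh')
        auth = os.path.join(ssh_dir, 'authorized_keys')

        # sshd (with StrictModes, the default) rejects group/world-writable paths
        for path in (ssh_dir, auth):
            mode = stat.S_IMODE(os.stat(path).st_mode)
            if mode & 0o022:
                print('%s is group/world writable (mode %o); sshd may ignore it' % (path, mode))

        # Each key must be a single line starting with its key type
        with open(auth) as f:
            for i, line in enumerate(f, 1):
                line = line.strip()
                if line and not line.startswith(('ssh-', 'ecdsa-')):
                    print('line %d does not look like a single-line OpenSSH key' % i)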

    Read the article

  • Saving a file as CSV in Excel always removes the BOM

    - by rickp
    I've been trying (unsuccessfully) to find a reasonable solution/explanation for why Excel defaults to removing the BOM when saving a file to the CSV type. Please forgive me if you find this a duplicate of an existing question about reading CSV files with non-ASCII encodings; that doesn't cover saving the file back out, which is where the biggest issue lies. Here is my current situation (which I gather is common among localized software dealing with Unicode characters and the CSV format):

    1. We export data to a CSV format using UTF-16LE, ensuring the BOM is set (0xFFFE). We validate after the file is generated, with a hex editor, to ensure the BOM was set correctly.
    2. We open the file in Excel (for this example we're exporting Japanese characters) and witness that Excel loads the file with the correct encoding.
    3. Attempting to save this file prompts a warning message indicating that the file may contain features that are not compatible with Unicode encoding, but asks if you'd like to save anyway.
    4. If you open the Save As dialog, it immediately asks you to save the file as "Unicode Text" rather than CSV. If you select the "CSV" extension and save the file, it removes the BOM (obviously along with all the Japanese characters).

    Why does this happen? Is there a solution to this problem, or is it a known bug/limitation of Excel? Additionally (as a side issue), it appears that Excel, when loading UTF-16LE encoded CSV files, only uses TAB delimiters. Again, is this another known bug/limitation of Excel?
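
    As an aside, the hex-editor validation in step 1 can be scripted. A small hedged sketch (the filename is a placeholder) that checks for the UTF-16LE byte order mark, which is stored on disk as the byte pair FF FE:

        # Verify that a file starts with the UTF-16LE byte-order mark
        with open('export.csv', 'rb') as f:  # placeholder filename
            bom = f.read(2)

        if bom == b'\xff\xfe':
            print('UTF-16LE BOM present')
        else:
            print('BOM missing or different:', bom.hex())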

    Read the article

< Previous Page | 1157 1158 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168  | Next Page >