Search Results

Search found 4783 results on 192 pages for 'a txt'.


  • Cannot properly save the source of an .html file containing Russian letters as .txt

    - by brilliant
    When I save the source of this page of a Russian website, http://www.mail.ru/, as a .txt file, all the Russian letters turn into Chinese characters (I am working on a Chinese computer at the moment). But when I save another page, from another Russian website, http://starling.rinet.ru/cgi-bin/response.cgi?root=/usr/local/share/starling/morpho&morpho=0&basename=\usr\local\share\starling\morpho\ozhegov\ozhegov&first=4001, also as a .txt file, all the Russian letters are saved in it as they are. Why is that?
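
    A likely factor is that the page is being re-saved in the system locale's default encoding (a Chinese code page on a Chinese Windows) rather than the charset the page declares. As a rough illustration of the idea, independent of any particular editor, here is a minimal Python sketch that fetches a page, decodes it with an explicitly assumed charset, and writes it out as UTF-8 (the charset name is an assumption; check the page's meta tag):

        import urllib.request

        # Fetch the raw bytes of the page (no decoding yet).
        raw = urllib.request.urlopen("http://www.mail.ru/").read()

        # Decode with the charset the page declares; windows-1251 is an
        # assumption here, so check the page's <meta charset=...> tag.
        text = raw.decode("windows-1251", errors="replace")

        # Write as UTF-8 so the Russian letters survive regardless of the
        # system locale's default code page.
        with open("source.txt", "w", encoding="utf-8") as f:
            f.write(text)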

    Read the article

  • Preventing Thunderbird from adding .txt to attachments on open

    - by Horcrux7
    How can I prevent Thunderbird from adding the .txt extension to a file when opening the attachment? I have this problem with .patch files, which I want to view in Notepad++. The problem is that Notepad++ does not detect the right formatting for the file because the extension is .txt. If I drag the file to the desktop and double-click it, everything works. Why does Thunderbird change the file name on opening? I am working on Windows 7.

    Read the article

  • Make Programmer's Notepad save files as .txt by default?

    - by acidzombie24
    I am using http://www.pnotepad.org/ (I wouldn't mind switching to something else if it is lightweight and has most or all of the features I like, which I'll check on an app-by-app basis). When I create a new tab/file and save it, unless I write .txt I get a file with no extension, which makes it hard to open since I can't double-click it (I don't think I can tell Win7 to set a default app for files with no extension). How do I make Programmer's Notepad save with a .txt extension when none is specified?

    Read the article

  • What bots are really worth letting onto a site?

    - by blunders
    Having written a number of bots, and having seen the massive number of random bots that happen to crawl a site, I am wondering: if the point of allowing bots onto a site is the potential for them to send real traffic back, is there any reason to allow bots that are not known to send real traffic back? And how do you spot these "good" bots, based on how they identify themselves, the IPs they come from, their behavior, etc.?

    Read the article

  • How to get search engines to properly index an ajax-driven search page

    - by Redtopia
    I have an ajax-driven search page that will allow users to search through a large collection of records. Each search result points to index.php?id=xyz (where xyz is the id of the record). The initial view does not have any records listed, and there is no interface that allows you to browse through all records; you can only conduct a search. How do I build the page so that spiders can crawl each record? Or is there another way (outside of this specific search page) to point spiders to a list of all records? FYI, the collection is rather large, so dumping links to every record in a single request is not a workable solution; outputting the records must be done in multiple requests. Each record can be viewed via a single page (e.g. "record.php?id=xyz"). I would like all the records indexed without anything being indexed from the sitemap page that shows where the records exist, for example:

        <a href="/result.php?id=record1">Record 1</a>
        <a href="/result.php?id=record2">Record 2</a>
        <a href="/result.php?id=record3">Record 3</a>
        <a href="/seo.php?page=2">next</a>

    Assuming this is the correct approach, I have these questions: How would the search engines find the crawl page? Is it possible to prevent the search engines from indexing the words "Record 1", etc. and "next"? Can I output only the links? Or maybe something like: …
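
    One conventional way to expose a large record collection to crawlers without a browsable interface is an XML sitemap, split across multiple files behind a sitemap index if needed; sitemaps list bare URLs, so no anchor text such as "Record 1" or "next" gets indexed from them. A minimal sketch (the domain and file name are placeholders):

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url><loc>http://www.example.com/record.php?id=record1</loc></url>
          <url><loc>http://www.example.com/record.php?id=record2</loc></url>
          <!-- up to 50,000 URLs per file; use a sitemap index beyond that -->
        </urlset>

    Search engines find it either through submission in their webmaster consoles or through a "Sitemap: http://www.example.com/sitemap.xml" line in robots.txt.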

    Read the article

  • Uploading an unfinished website

    - by Daniel
    I have a pretty basic question. I developed a neat little website which I'm ready to upload, but it still needs a bit of work. The designer needs the HTML to do his work, so the website needs to be uploaded. Besides that, I have to correct a couple of details, set up the friendly URLs, etc. What's the best way to set up the webpage on the definitive hosting with the definitive domain, blocking it from any unknown users and without affecting SEO and that kind of thing? If I were to just upload it, the non-definitive website might be crawled by a search engine bot. Thanks!
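
    A common combination for this situation is HTTP Basic authentication plus a blanket robots.txt, so both visitors and well-behaved crawlers stay out until launch; since the site was never indexed, removing both at launch should leave no lasting SEO effect. A sketch for Apache (the password file path is an assumption):

        # .htaccess: require a login for the whole site while it is unfinished
        AuthType Basic
        AuthName "Private beta"
        AuthUserFile /home/user/.htpasswd
        Require valid-user

        # robots.txt: ask crawlers to stay away entirely
        User-agent: *
        Disallow: /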

    Read the article

  • What dangers await if I block non-standard, non-major-USA search engine bots from my USA-only website?

    - by Ryan
    I noticed tons of bandwidth being used by non-USA search engine bots, so I began blocking them in an effort to save bandwidth and CPU cycles for actual users and the search engines they come from (Google, Bing, Yahoo, Ask, etc.). Other than potentially losing some international traffic (which isn't really important to us, since all of our content is very USA-centric), what additional dangers should I be concerned about? I'm using a modified version of Jeff Starr's User Agent Blocklist.
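
    For reference, blocklists of this kind are typically plain Apache directives keyed off the User-Agent header, in roughly this shape (the bot names are placeholders, not a recommendation):

        # Tag requests from unwanted crawlers, then deny them
        SetEnvIfNoCase User-Agent "ExampleBot" block_bot
        SetEnvIfNoCase User-Agent "AnotherCrawler" block_bot
        <Limit GET POST HEAD>
          Order Allow,Deny
          Allow from all
          Deny from env=block_bot
        </Limit>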

    Read the article

  • Should I set NOINDEX header for my JS, CSS and image files?

    - by Yoga
    Is there any harm if my site sends NOINDEX headers for all my static assets? For image files, I am referring to the valueless ones, e.g. background images, button images, etc. Update, for more background: my concern is that Google recently said they also execute JS and might fetch content via Ajax. So, for example, if I send noindex for my jQuery script and Google therefore cannot use it to load Ajax content, I suppose that would not be good for my site's SEO, right?
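
    For what it's worth, the usual way to send such a header for static assets is X-Robots-Tag set at the web server level. A sketch for Apache with mod_headers enabled (the extension list is an assumption):

        <FilesMatch "\.(js|css|png|gif)$">
          Header set X-Robots-Tag "noindex"
        </FilesMatch>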

    Read the article

  • Google Webmaster Tools robots test not working

    - by tracy_snap
    Within Webmaster Tools I have supplied my test content:

        User-agent: *
        Disallow:/admin/
        Disallow: /tag/

    When I specify the URL to test against, for example http://www.site.com/tag/, it gives me this result: "Allowed: Detected as a directory; specific files may have different restrictions". As far as I know I have set this up correctly; shouldn't Google be saying that the /tag/ directory is "disallowed"?

    Read the article

  • How can I allow search engines to index my invite-only website in Ruby on Rails?

    - by tstyle
    I have a Ruby on Rails website that will be in invite-only mode for the next couple of months. Currently I have it set up so that a visit to any page performs authentication:

        before_filter :authenticate, :except => [:beta]  # authenticate checks for a logged-in user

    But the webpage has a lot of content that I would like to see indexed by search engines, and I was wondering if there's an easy way to allow crawlers to do their work? I am not very knowledgeable about SEO-related stuff, so sorry if this is a suboptimal way to phrase the question.
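
    One approach sometimes used is to skip authentication for requests whose user agent matches known crawlers. Two caveats: user agents can be spoofed, so verifying the crawler (e.g. by reverse DNS) is advisable, and serving crawlers different content than users can be treated as cloaking. A rough sketch against the before_filter above (the pattern and placement are assumptions):

        CRAWLER_PATTERN = /Googlebot|bingbot|Slurp/i  # assumed list of crawlers to allow

        def authenticate
          # Let recognized crawlers read public pages without logging in.
          return true if request.user_agent =~ CRAWLER_PATTERN
          # ... existing logged-in-user check goes here ...
        end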

    Read the article

  • SEO consideration for duplicate sites

    - by Malk
    I am building a brochure-ware website for a company that sells products all across the world. They need the site to ask the user what region they are in before using the site; there are five regions. This is because different products are offered to different regions, and each region may or may not want to customize its own content. However, at launch and likely forever, most of the pages will be exactly the same except for what is listed in the footer and in the product selection menu. My question is: how should I structure the sitemap for this site for best SEO? Should I be concerned about duplicate content penalties and/or cannibalizing the site's presence on the SERP? Some considerations:

    - The client wants to be able to print links directly to region-specific content, bypassing any prompt for the user to select a region (to ensure they land on the target page).
    - The client cannot have a "default" region, so the user must have a region specified.
    - "Clean" URLs are important, but there is wiggle room.
    - The client does not want each region to have its own domain.
    - There will be a link on the page to allow users to specify a different region.
    - The client is not concerned with localization... at this time.
    - Some products are available in multiple regions.

    A quick list of options I am considering:

    - www.site.com/region/page
    - region.site.com/page
    - www.site.com/page?region (no cookie; pages require the parameter, and if visited without it, the user must select a region)
    - www.site.com/page (using a cookie and a splash screen if needed; a parameter could be passed in to set the region for direct linking)

    Thanks in advance for your advice.
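
    Whichever URL structure is chosen, one way to tell search engines that the regional pages are deliberate alternates rather than accidental duplicates is rel="alternate" hreflang markup in each page's head. A sketch against the /region/page option (the region codes are illustrative):

        <link rel="alternate" hreflang="en-us" href="http://www.site.com/us/page" />
        <link rel="alternate" hreflang="en-gb" href="http://www.site.com/uk/page" />
        <link rel="alternate" hreflang="en-au" href="http://www.site.com/au/page" />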

    Read the article

  • Does GoogleBot respect User-agent: *

    - by rkulla
    I blocked a page in robots.txt under User-agent: *, and tried to do a manual removal of that URL from Google's cache in Webmaster Tools. Google said it wasn't being blocked by my robots.txt, so I then blocked it specifically under User-agent: GoogleBot and tried removing it again, and this time it worked. Does that mean Google doesn't respect User-agent: *, or what?
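
    For reference, a crawler obeys only the single most specific User-agent group that matches it: once a "User-agent: Googlebot" group exists, Googlebot ignores the "User-agent: *" group entirely, which may be related to the behavior observed. A sketch; rules meant for Googlebot have to be repeated in its own group:

        User-agent: *
        Disallow: /private/

        # Googlebot reads only the group below and skips the one above
        User-agent: Googlebot
        Disallow: /private/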

    Read the article

  • [Disallow: /index.php] seems to block /my-beautiful-sef-url-123

    - by Jaroslav Záruba
    Hello, I have a robots.txt that looks like this:

        User-agent: *
        Disallow: /system/
        Disallow: /admin/
        Disallow: /index.php

    The obvious goal has been to prevent all the ugly URLs from being indexed, as they all begin with "/index.php". But for some reason all URLs like /my-beautiful-sef-url-123 are listed under Crawl errors in Google Webmaster Tools with "URL restricted by robots.txt". (When I test such a URL it yields Allowed for both Googlebot and Googlebot-Mobile.) Can anyone help, please?
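
    If the goal is to block only index.php itself and its query-string variants, Googlebot's pattern extensions can scope the rule more narrowly; whether that clears the crawl-error reports is uncertain, but it removes any ambiguity with rewritten URLs. A sketch (the trailing $ and ? are Googlebot extensions, not core robots.txt):

        User-agent: *
        Disallow: /system/
        Disallow: /admin/
        Disallow: /index.php$
        Disallow: /index.php?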

    Read the article

  • How to create a default association for files with no extension

    - by acidzombie24
    I am using http://www.pnotepad.org/ (I wouldn't mind switching to something else if it is lightweight and has most or all of the features I like, which I'll check on an app-by-app basis). When I create a new tab/file and save it, unless I write .txt I get a file with no extension, which makes it hard to open since I can't double-click it (I don't think I can tell Win7 to set a default app for files with no extension). How do I make Programmer's Notepad save with a .txt extension when none is specified?

    Read the article

  • OSX: The item XYZ.txt~ can’t be moved to the Trash because it can’t be deleted

    - by dsg
    I'm trying to delete a file under OSX Lion, but can't. I get the following message: "The item XYZ.txt~ can't be moved to the Trash because it can't be deleted." Here's what I've tried:

    - selecting the file and pressing COMMAND + DELETE (I get the message above)
    - renaming the file in Finder (there is no option to rename the file)
    - sudo ls -a in a terminal (the file does not appear)
    - sudo rm XYZ.txt~ (I get "No such file or directory")

    How do I remove this file?

    EDIT: The file went away after restarting. My guess is that it was a glitch in Finder.

    Read the article

  • txt file descriptor in lsof

    - by wfaulk
    In my experience, files that have the file descriptor txt in lsof output are the executable file itself and shared objects. The lsof man page says it means "program text (code and data)". While debugging a problem, I found a large number of data files (specifically, ElasticSearch database index files) that lsof reported as txt. These are definitely not executable files. The process was ElasticSearch itself, which is a Java process, if that helps point someone in the right direction. I want to understand how this process is opening and using these files in a way that gets them reported like this. I'm trying to understand some memory utilization, and I suspect that these open files are related in some way to metrics I'm seeing. The system is Solaris 10 x86.

    Read the article

  • Enabling the boot logging option in Win XP doesn't produce the ntbtlog.txt file

    - by Xolstice
    I have tried enabling the boot logging option for Windows XP Pro using both the F8 menu and the boot.ini method, but it does not seem to create the expected ntbtlog.txt file. I'm doing this to help solve a problem where the Avast antivirus service is not starting up; it keeps complaining with the message: "Error 1068: The dependency service or group failed to start". I notice that the file is produced on a system where Avast is starting up without any problems. Can anybody suggest why the ntbtlog.txt file is not being created?

    Read the article

  • How to download images using wget from a txt file that contains links

    - by SwanC
    I can download images using wget from a website. But I have several links, and I have saved them in a text file. For example:

        wget -r -A.jpg -np www.fragrancenet.com

    There are so many pictures on this website, so I have saved the links for the particular pictures I want:

        www.fragrancenet.com/images1
        www.fragrancenet.com/images2
        www.fragrancenet.com/images3

    The links are saved in a text file named images.txt on my computer. How can I download the files listed in images.txt using wget?
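
    wget reads a list of URLs from a file with the -i flag. If images.txt contains direct links to the image files, the first form is enough; if the lines are pages that contain the images, combine -i with the recursive options already used (the depth of 1 is an assumption):

        # direct image URLs, one per line in images.txt
        wget -i images.txt

        # pages containing images: recurse one level, keep only .jpg files
        wget -r -l 1 -np -A jpg -i images.txt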

    Read the article

  • Using a .txt file Resource in VB

    - by Tony C
    Right now I have a couple of lines of code in VB that read a text file:

        Dim fileReader As String
        fileReader = My.Computer.FileSystem.ReadAllText("data5.txt")

    data5.txt is a resource in my application; however, the application doesn't run because it can't find data5.txt. I'm pretty sure there is another call for reading a .txt file from the resources that I'm overlooking, but I can't seem to figure it out. Does anyone know of a simple fix for this, or maybe a whole new line of code? Thanks in advance!
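
    If data5.txt was added through Project Properties > Resources, it is compiled into the assembly rather than shipped as a file on disk, which would explain why ReadAllText cannot find it; a text resource is exposed as a String through My.Resources. A sketch, assuming the resource is named data5:

        ' Read the embedded text resource instead of looking for a file on disk.
        Dim fileReader As String = My.Resources.data5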

    Read the article

  • Get standard application for txt files (.NET)

    - by iDog
    Possible Duplicate: Finding the default application for opening a particular file type on Windows

    In my application I want to open a text file which has no .txt extension. Is there any way to get the standard application for .txt files in .NET (C#)? Sure, I could use notepad, but there might be some people (like me) who prefer another (their standard) editor. Thank you very much.

    Edit: The registry key [HKEY_CLASSES_ROOT]\txtfile\shell\open\command references notepad, but that is not my standard app for txt files. How do I get my current standard app for .txt?
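
    One way that respects the user's own choice (including the per-user override that trumps the HKEY_CLASSES_ROOT entry) is the shell's AssocQueryString function, which resolves the effective handler rather than the raw registry default. A minimal C# sketch:

        using System;
        using System.Runtime.InteropServices;
        using System.Text;

        class TxtAssociation
        {
            [DllImport("shlwapi.dll", CharSet = CharSet.Unicode)]
            static extern uint AssocQueryString(uint flags, uint str,
                string assoc, string extra, StringBuilder outBuffer, ref uint outSize);

            const uint ASSOCSTR_EXECUTABLE = 2;  // the executable behind the verb

            static void Main()
            {
                uint size = 1024;
                var buffer = new StringBuilder((int)size);
                // Ask the shell which executable handles the "open" verb for .txt.
                AssocQueryString(0, ASSOCSTR_EXECUTABLE, ".txt", "open", buffer, ref size);
                Console.WriteLine(buffer);  // e.g. the user's preferred editor
            }
        }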

    Read the article

  • Batch to copy and replace a txt file from one server to another

    - by Sunny
    I have two servers, server1 and server2, on the same network, but mapping between them requires a username and password. server1 has a text file at C:\Users\output.txt. I want to create and schedule a batch script on server1 which should copy output.txt from server1 to server2, replacing it at the path E:\data\output.txt on a daily basis. I don't want to map server2 manually every time I start my computer, nor do I want to enter my username and password each time. I am using the following commands in a batch file, but they are not working:

        net use C: \\server2\E:\data server2password /user:server2domain\server2username /savecred /p:yes
        xcopy C:\Users\output.txt E:\data\
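
    For comparison, the usual shape of such a script: net use wants a free local drive letter (not C:, which is already taken), a UNC share name rather than a remote drive-and-path, and the password placed before the /user switch. The share name "data" here is an assumption; E:\data has to actually be shared on server2 under some name:

        rem Map a free drive letter to the shared folder on server2
        net use Z: \\server2\data server2password /user:server2domain\server2username /persistent:yes

        rem Copy the file, overwriting without prompting
        xcopy C:\Users\output.txt Z:\ /Y

        rem Optionally drop the mapping again
        net use Z: /delete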

    Read the article

  • VMware Workstation downloads as a txt file?

    - by George Mauer
    I just went to the VMware website because I want to try Workstation over VirtualBox. I signed up for a Workstation trial and clicked download on the 64-bit Linux version. What downloaded is a 320-megabyte txt file: VMware-Workstation-Full-8.0.2-591240.x86_64.txt. What gives? Is anyone familiar with this pattern of delivering software? How do I run it? Here is the beginning of that file:

        #!/usr/bin/env bash
        #
        # VMware Installer Launcher
        #
        # This is the executable stub to check if the VMware Installer Service
        # is installed and if so, launch it. If it is not installed, the
        # attached payload is extracted, the VMIS is installed, and the VMIS
        # is launched to install the bundle as normal.

        # Architecture this bundle was built for (x86 or x64)
        ARCH=x64

        if [ -z "$BASH" ]; then
            # $- expands to the current options so things like -x get passed through
            if [ ! -z "$-" ]; then
                opts="-$-"
            fi
            # dash flips out if $opts is quoted, so don't.
            exec /usr/bin/env bash $opts "$0" "$@"
            echo "Unable to restart with bash shell"
            exit 1
        fi

        set -e

        ETCDIR=/etc/vmware-installer
        OLDETCDIR="/etc/vmware"

        ### Offsets ###
        # These are offsets that are later used relative to EOF.
        FOOTER_SIZE=52

        # This won't work with non-GNU stat.
        FILE_SIZE=`stat --format "%s" "$0"`
        offset=$(($FILE_SIZE - 4))
        MAGIC_OFFSET=$offset
        offset=$(($offset - 4))
        CHECKSUM_OFFSET=$offset
        offset=$(($offset - 4))
        VERSION_OFFSET=$offset
        offset=$(($offset - 4))
        PREPAYLOAD_OFFSET=$offset
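
    Given that the file is just a self-extracting shell script, as the stub above shows, it can be executed directly; the .txt name is only cosmetic. A minimal sketch:

        # make the download executable and run it (the installer needs root)
        chmod +x VMware-Workstation-Full-8.0.2-591240.x86_64.txt
        sudo ./VMware-Workstation-Full-8.0.2-591240.x86_64.txt

        # or, without chmod, hand the file to bash explicitly
        sudo bash VMware-Workstation-Full-8.0.2-591240.x86_64.txt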

    Read the article
