Search Results

Search found 36925 results on 1477 pages for 'large xml document'.

  • Quick way to bypass proxy with DownThemAll

    - by endolith
    I've been using an SSH proxy to my home network to encrypt my internet surfing, which works fine. But the connection is much slower than a direct one, and when I'm downloading large files I'd rather go around the proxy. Currently, I send the download to DownThemAll, go to FoxyProxy and disable the proxy, cancel and resume the download, and once it has started I go back to FoxyProxy and re-enable it. Is there any way to make DownThemAll's downloads bypass FoxyProxy automatically?

  • Formula-based Excel page headers

    - by Jake Krohn
    I'm using the "Rows to repeat at top" function in Excel's "Page Setup" dialog to ensure that a multi-row header block appears on every printed page of my worksheet. However, I'd like to be able to change certain bits of the header based on the content of the current page. I would simply like to display the value of one cell in the first row that is printed on the page. If this is my header: Section: xx And the data looks like this (columns are Section and Name): 1 Foo 1 Bar 2 Baz I want the "xx" in the header to be "1". If, further down on the next page, the value in the Section column is "3", I want that printed in the header of the next page. I originally thought that using the "OFFSET" function might help, e.g. ="Section: "&OFFSET(A2, 1, 0) But it only shows the offset from the original placement of the header, thus only working on page 1. The end document is a PDF, so right now I'm able to go back in with the "TouchUp Text Tool" in Acrobat and add the numbers page by page. But it gets to be a tedious process with 70+ page reports. Anyone have any better ideas that don't require me mucking up the original Excel document with inserted headers every N lines? This is Excel 2008 for Mac, if it makes a difference.

  • How to take a search query and append modifiers to the end of it

    - by Kimber
    This is a Greasemonkey question. I'm trying to modify an old Google discussions script. What we're wanting to do is take the Google search query and add modifiers to the end of it, like this:

        search query:  "superuser"
        modifiers:     inurl:greasemonkey+question
        end result:    "superuser" inurl:greasemonkey+question

    The old script creates a new div inside the "hdtb_more_mn" element, which is where you get the new Discussions tab. However, since the "tbm=dsc" option for doing a discussion search has died, the script no longer works, hence the need to add modifiers to your searches. I tried to edit the script, but it appends the modifiers to the end of the URL, which includes "&client=firefox-a&hs=8uS&rls=org.mozilla:en-US:official"; that means you're searching for those parameters as well as your query, which doesn't work. I would like to append the modifiers to the end of the search query rather than the whole URL. I'm just not sure how to code it so that it adds the "&tbm=" stuff inside "discussionDiv.innerHTML" to the end of the query. The id of the Google search box seems to be "gbqfq", but I'm not sure how to use it. Here is the old script:

        // ==UserScript==
        // @name        Add Back Google Discussions
        // @version     1.4
        // @description Adds back the Discussion filters to Google Search
        // @include     *://*.google.tld/search*
        // ==/UserScript==

        var url = location.href;
        // Only add the tab if this isn't already a discussion search.
        if (url.indexOf('tbm=dsc') < 0)
            addFilterType('dsc', 'Discussions');

        function addFilterType(val, name) {
            // Build a menu item linking to the current search URL
            // with any existing tbm= parameter swapped for this one.
            var searchType = document.getElementById('hdtb_more_mn');
            var discussionDiv = document.createElement('DIV');
            discussionDiv.className = 'hdtb_mitem';
            discussionDiv.innerHTML = '<a class="q qs" href="' +
                (url.replace(/&tbm=[^&]*/g, '') + '&tbm=' + val) +
                '">' + name + '</a>';
            searchType.innerHTML += discussionDiv.outerHTML;
        }

    Thanks for any help, or suggestions on who to ask. Google Chrome has an extension for discussion searches, but Firefox doesn't seem to have one yet, which is why I'm trying to modify the above.

  • How can I prepare a TortoiseSVN installer to use the serf HTTP library instead of neon?

    - by Sam Johnson
    I'm going to be distributing instructions on how to access our new Subversion repository with TortoiseSVN. Because the repository is hosted on Windows and contains some large files, we have to use the serf HTTP library instead of neon. This is normally done by manually editing the Subversion "servers" file on the client machine and adding the line

        http-library=serf

    Is there a way I can customize the TortoiseSVN installer to do this automatically? I'm just trying to make it as easy as possible for our new SVN users to get up and running.
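
    Short of rebuilding the installer itself, a hedged alternative is a small script pushed to each client (for example, from a login script) that forces the setting into the per-user Subversion "servers" file. A minimal sketch, assuming the standard %APPDATA%\Subversion location; note that configparser rewrites the file and drops its comments:

        import configparser, os

        path = os.path.join(os.environ["APPDATA"], "Subversion", "servers")
        cfg = configparser.ConfigParser()
        cfg.read(path)                       # tolerate a missing file
        if not cfg.has_section("global"):
            cfg.add_section("global")
        cfg.set("global", "http-library", "serf")   # prefer serf over neon
        with open(path, "w") as f:
            cfg.write(f)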

  • Is it possible to invert the image output on a computer for a rear-projection screen?

    - by Faken
    Hello everyone, is there a way to flip/invert/mirror the video output on a computer with an onboard Intel video card? I'm building a large rear-projection screen, and I need to invert the image so it projects properly. One of my projectors has a function that inverts the image automatically, but the other may not (likely not), and I need two projectors to drive the system, so I'm hoping to do the inversion on the computer side before projecting.

  • Broken pipe on nginx

    - by schneck
    Hi there, I set up PHP/FastCGI with nginx, and now I want to upload very large files via a Java applet. After about 30 seconds, the applet reports a "broken pipe". I find nothing in the server logs. I changed every relevant setting in php.ini (max_execution_time, max_input_time, memory_limit, post_max_size) to very high values, but nothing helps. Any ideas?

  • Infotips for Word documents on network drives in Windows XP

    - by Knight Samar
    Hi, MS Word 2007 files have a property page for entering details such as a summary and title, which is displayed when you hover over a document on the Desktop. On my Windows XP SP2 computer, Windows Explorer shows these special properties for files on the Desktop, but not for files on network drives. This is a big problem when I have a large collection of Word documents in one folder. How can I display these special properties (infotips) for documents on my network drives? Thanks :)

  • How to get the best LINPACK result and conquer the Top500?

    - by knweiss
    Given a large Linux HPC cluster with hundreds or thousands of nodes, what are your best practices for getting the best possible LINPACK benchmark (HPL) result to submit to the Top500 supercomputer list? To give you an idea of the kind of answers I would appreciate, here are some sub-questions:

    - How do you tune the parameters (N, NB, P, Q, memory alignment, etc.) in the HPL.dat file without spending too much time trying every possible permutation, especially with large problem sizes N? (A sizing sketch follows this list.)
    - Are there any Top500 submission rules to be aware of? What is allowed, and what isn't?
    - Which MPI product and version do you use? Does it make a difference? Any special host order in your MPI machine file? Do you use CPU pinning? How do you configure your interconnect, and which interconnect?
    - Which BLAS package do you use for which CPU model (Intel MKL, AMD ACML, GotoBLAS2, etc.)?
    - How do you prepare for the big run on all nodes? Do you start with small runs on a subset of nodes and then scale up? Is it really necessary to run LINPACK on all of the nodes at once, or is extrapolation allowed?
    - How do you optimize for the latest Intel/AMD CPUs? Hyper-threading? NUMA?
    - Is it worth recompiling the software stack, or do you use precompiled binaries? Which settings, which compiler, and which compiler optimizations? (What about profile-guided compilation?)
    - How do you get the best result given only a limited amount of time for the benchmark run? (You can block a huge cluster forever.)
    - How do you prepare the individual nodes (stopping system daemons, freeing memory, etc.)?
    - How do you deal with hardware faults ruining a huge run?
    - Are there any must-read documents or websites on this topic? For example, I would love to hear background stories from some of the current Top500 systems and how they ran their LINPACK benchmarks.

    I deliberately don't want to mention concrete hardware details or discuss hardware recommendations, because I don't want to limit the answers. However, feel free to mention hints for specific CPU models.
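
    For the first sub-question, a commonly cited rule of thumb sizes N so the matrix fills roughly 80% of total memory (8 bytes per double-precision element), rounded down to a multiple of NB. A back-of-the-envelope sketch; all cluster numbers here are hypothetical:

        import math

        nodes = 100                           # hypothetical cluster size
        mem_per_node = 48 * 2**30             # bytes of RAM per node (hypothetical)
        usable = 0.80 * nodes * mem_per_node  # aim to fill ~80% of total RAM
        n = int(math.sqrt(usable / 8))        # 8 bytes per double-precision element
        nb = 192                              # typical block size; tune per BLAS/CPU
        n -= n % nb                           # round N down to a multiple of NB
        print("suggested HPL problem size N =", n)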

  • Determine the percentage of a file that has been ftp'd from client to server

    - by klwillie
    I want to FTP a large file from a Windows client to a Windows server, using their IP addresses, on a private network with no internet access. While the file is transferring, I would like to determine how many bytes the server has received, and then use that information to compute, in real time, the percentage of the file that has been transferred. Any recommendations for the FTP command syntax and the C# code to achieve this?
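
    Not the C# the question asks for, but a minimal sketch of the approach in Python: ftplib's storbinary accepts a callback that is invoked after each block is sent, so you can track progress on the client side (bytes handed to the socket, which slightly leads what the server has committed to disk). The server address, credentials, and filename are hypothetical; the same counting-loop pattern ports to C# by writing chunks to FtpWebRequest's request stream.

        import ftplib, os

        path = "bigfile.bin"                  # hypothetical file
        total = os.path.getsize(path)
        sent = 0

        def progress(block):
            # Called by storbinary after each block is sent.
            global sent
            sent += len(block)
            print("\r%5.1f%% (%d of %d bytes)"
                  % (100.0 * sent / total, sent, total), end="", flush=True)

        ftp = ftplib.FTP("192.0.2.10")        # example server address
        ftp.login("user", "password")         # hypothetical credentials
        with open(path, "rb") as f:
            ftp.storbinary("STOR " + path, f, blocksize=8192, callback=progress)
        ftp.quit()
        print()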

  • What's a good text expander for Windows?

    - by chris.w.mclean
    What's a good text expander out there for Windows? Ideally it needs to work with MS Word and needs to be configurable in how it gets triggered (e.g. the string "hdt" followed by a space gets transformed into "Help Desk Ticket", but "hdt" without the trailing space gets ignored). It also needs an import option, so a large list of tags and expansions can be loaded. Plugins for UltraEdit/Notepad++ would also be acceptable.

  • How to convert a really big HTML file to PDF in Windows

    - by PeterStrange
    We have a few really large HTML files (60-100 MB) that we cannot convert to PDF with any reliability. Adobe Acrobat 9 crashes when it hits the 2 GB memory limit for 32-bit applications. OpenOffice converts, but removes some of the anchors. ActivePDF WebGrabber crashes. Is a 64-bit setup an option for this kind of thing? I see a bunch of options out there, but can any of them do better than Adobe Acrobat 9 itself?

  • Need for page file with 12 GB RAM

    - by MartinStettner
    Hi, I recently got my new PC with 12 GB of RAM, running 64-bit Windows 7. The default installation suggests a 12 GB page file on the system drive, which I think is both inefficient and expensive on an SSD. I'm wondering if I need any virtual memory at all: 12 GB is more than I had on my previous machine including the page file (3 GB RAM + 3 GB page file). Thanks, Martin. EDIT: As mokubai pointed out, the question is pretty much answered in "Windows 7 pagefile size with large RAM and SSD".

  • iTunes: copy just the metadata (song and album ratings, playlists) from an iPod

    - by Jared Updike
    I have an iPod touch that I synced with my Windows computer (iTunes 9.0, I think) until my hard drive failed and I lost my entire library. I rebuilt the library (the songs) from a year-old backup (and various other sources), but my playlists and ratings are of course a year old. The iPod itself has most of the playlists and ratings I care about (favorite songs and albums, rated 4 and 5, for example). I'm in a catch-22: I feel nervous that I haven't backed up my iPod in around 4 months (since my drive failed), so I'd like to back it up as soon as possible... but to back it up I have to clear all the songs and playlists and copy them back, which I can't really do, since I first need to rebuild my playlists on my computer using the data only available on my iPod! The question: is there a better way to READ the information off my iPod than doing it manually, song by song, album by album, and playlist by playlist (XML, text dump, database, spreadsheet, anything)? In other words, I mostly want the information (metadata like ratings and playlists, not the songs) copied off the iPod so I can more quickly rebuild my iTunes library's ratings and playlists (manually), then finally wipe the music and back up my apps, etc. Then I'd like to copy the music back immediately. The part I'd like to avoid is manually navigating everything on my iPod to read through all the playlists and ratings (50 GB, 6,000+ songs) as I re-enter all of that data by hand. I've done a few dozen albums, and it's pretty time-consuming with all the tapping around on the iPod. Reading from a spreadsheet (or XML, which I could write a script to get into spreadsheet form) would help tremendously, plus I'd then have a backup of that information somewhere besides just my iPod.

  • Linux disk usage report inconsistency after removing a file; cPanel reports inaccurate disk usage

    - by brando
    Relevant software: Red Hat Enterprise Linux Server release 6.3 (Santiago), with cPanel 11.34.0 (build 7) installed.

    Background and problem: I was getting a disk usage warning (via cPanel) because /var seemed to be filling up on my server. The assumption was that a log file was growing too large and filling the partition. I recently removed a large log file and changed my syslog configuration to rotate the log files more regularly: I removed something like /var/log/somefile and edited /etc/rsyslog.conf. This is why I was suspicious of the disk usage warning issued by cPanel; it didn't seem right. This is what df was reporting for the partitions:

        [/var]# df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda2             9.9G  511M  8.9G   6% /
        tmpfs                 5.9G     0  5.9G   0% /dev/shm
        /dev/sda1              99M   53M   42M  56% /boot
        /dev/sda8             883G  384G  455G  46% /home
        /dev/sdb1             9.9G  151M  9.3G   2% /tmp
        /dev/sda3             9.9G  7.8G  1.6G  84% /usr
        /dev/sda5             9.9G  9.3G  108M  99% /var

    And this is what du was reporting for the /var mount point:

        [/var]# du -sh
        528M    .

    Clearly something funky was going on. I had a similar reporting inconsistency in the past, and df reported correctly after I restarted the server, so I decided to reboot and see if the same thing would happen. This is what df reports now:

        [~]# df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda2             9.9G  511M  8.9G   6% /
        tmpfs                 5.9G     0  5.9G   0% /dev/shm
        /dev/sda1              99M   53M   42M  56% /boot
        /dev/sda8             883G  384G  455G  46% /home
        /dev/sdb1             9.9G  151M  9.3G   2% /tmp
        /dev/sda3             9.9G  7.8G  1.6G  84% /usr
        /dev/sda5             9.9G  697M  8.7G   8% /var

    That looks more like what I'd expect. For consistency, this is what du now reports for /var:

        [/var]# du -sh
        638M    .

    Question: this is a nuisance. I'm not sure where the disk usage reports issued by cPanel get their information, but it clearly isn't correct. How can I avoid this inaccurate reporting in the future? df reporting the wrong disk usage seems to be a strong indicator of the underlying problem, but I'm not sure. Is there a way to 'refresh' the filesystem somehow so that the df report is accurate without restarting the server? Any other ideas for resolving this issue?
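
    For what it's worth, the classic cause of df reporting more usage than du is a deleted file that some process (here, plausibly rsyslog before its restart) still holds open: the space is only released when the descriptor closes, which is why a reboot "fixed" it. A hedged sketch that finds such files without rebooting, roughly equivalent to lsof | grep deleted (run as root to see all processes):

        import os

        for pid in filter(str.isdigit, os.listdir("/proc")):
            fd_dir = "/proc/%s/fd" % pid
            try:
                fds = os.listdir(fd_dir)
            except OSError:
                continue                       # process exited or no permission
            for fd in fds:
                link = os.path.join(fd_dir, fd)
                try:
                    target = os.readlink(link)
                except OSError:
                    continue
                if target.endswith("(deleted)"):
                    size = os.stat(link).st_size   # space still held on disk
                    print("pid %s holds %s (%d bytes)" % (pid, target, size))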

  • Local or public NTP servers?

    - by BeeOnRope
    For a relatively large network (thousands of hosts), what are the arguments for and against running a locally managed pool of NTP servers (perhaps periodically set from a public NTP server) and having all other hosts on the network use that pool, versus having all hosts simply use public NTP servers directly, say via pool.ntp.org? Aside from the pros and cons, what is typical best practice today?

  • How to add extensions to a lot of files based on the content of each file?

    - by v8media
    I've got over 10,000 files without extensions, from older versions of the Mac OS. They're deeply nested, and they have all sorts of strange formatting and characters in their names. They no longer have file types or creator codes attached to them. A great many of these files contain text that would let me determine an extension (for example, "Word.Document.8" appears in every file created by that version of Word, and "Excel.Sheet.8" in every file created by that version of Excel). I found a script that looks like it would work for one of these file types at a time, but it erases parts of filenames after nefarious characters, which is not good:

        find . -type f -not -name "." -print0 |\
            xargs -0 file |\
            grep 'Word.Document.8' |\
            sed 's/:.*//' |\
            xargs -I % echo mv % %.doc

    So, two questions. First: should I clean the characters in the filenames beforehand, or deal with them programmatically in the script and leave them unchanged? As long as I lose no information from the filenames, I don't see a problem cleaning out slashes and other problem characters. If I clean the filenames, there are likely to be duplicates, so any cleaning script would have to add something like "-1" before the extension to make sure nothing gets lost. Second: how do I change the script so that it looks for more than one file type at the same time and gives each file the proper extension? I'm not tied to this script, but it is understandable, which is a pro. Mac OS X 10.6 is installed on this file server, but I've got access to any recent version of OS X. Thanks, Ian
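
    A hedged Python alternative to the pipeline above: because no filenames are re-parsed by a shell, strange characters survive intact, and a signature map handles several file types in one pass. The signature-to-extension map is only a sample, and name collisions get the "-1"-style suffix described in the question:

        import os

        SIGNATURES = {                        # sample signature -> extension map
            b"Word.Document.8": ".doc",
            b"Excel.Sheet.8":   ".xls",
        }

        for dirpath, _, names in os.walk("."):
            for name in names:
                path = os.path.join(dirpath, name)
                if os.path.splitext(name)[1]:
                    continue                      # already has an extension
                try:
                    with open(path, "rb") as f:
                        head = f.read(64 * 1024)  # signatures sit near the start
                except OSError:
                    continue
                for sig, ext in SIGNATURES.items():
                    if sig in head:
                        target = path + ext
                        n = 1
                        while os.path.exists(target):   # avoid collisions
                            target = "%s-%d%s" % (path, n, ext)
                            n += 1
                        os.rename(path, target)
                        break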

  • MCollective alternative?

    - by WinkyWolly
    I really want to run MCollective on my fleet of servers, but there are a large number of untrusted users on each machine, which makes MCollective less than ideal in my eyes. I'm aware there are precautions you can take, but I'm not familiar enough with ActiveMQ, and I want something that is, out of the box, a bit more mindful of environments like mine. Essentially, I'm looking for a fact-collection tool. (Tagging under puppet/server since there is no mcollective tag and I don't have enough reputation to create one.)

  • How can I detect hard drive failures?

    - by Francis
    I am in charge of a large number of Windows servers. Recently, many have been reporting hard drive errors with event codes 11 and 55. CHKDSK indicates that the drives are fine most of the time. What other diagnostic tools could I use to more accurately detect hard drive failures? Could these Windows events be false positives? I have already evaluated S.M.A.R.T., and it seems to have significant sensitivity and specificity issues.

  • Is Ubuntu a viable replacement of Windows XP for small enterprise environments?

    - by Alex. S.
    Hi all, I'm a newbie systems administrator, so any advice would be great. I would like to set up Ubuntu 8.04 LTS in a small management-consulting office (around 50 workstations) instead of Windows XP. I would install MS Office 2007 via Wine (*). It would be a fresh installation, so the migration would be less of a pain. The new setup would also include a small server as a document repository and, for now, a backup server. Later, I would add other goodies such as an IM server, a document management solution, and other collaborative tools. What do you advise in this scenario? Do you think it's viable? Should I try to convince my managers this is a good idea? I consider myself a fairly experienced user of both systems, and I'm the only guy in charge of everything. I need to cut costs, and I think antivirus and antimalware software are a waste of money and time. Is this a good idea, or should I give up and instead lock down the Windows systems and install AV software? Is there anything else in this setup I'm not foreseeing? (*) The only catch on my test machine so far is that Office SmartArt doesn't work properly; the rest of Office 2007 seems OK.

  • Managing Internal Yum Repository Groups

    - by elmt
    What is the best method for handling yum group dependencies? For example, take this comps.xml file:

        <comps>
          <group>
            <id>production</id>
            <name>Production</name>
            <default>true</default>
            <description>Packages required to run</description>
            <uservisible>true</uservisible>
            <packagelist>
              <packagereq type="default">ssh</packagereq>
            </packagelist>
          </group>
          <group>
            <id>development</id>
            <name>Development</name>
            <default>false</default>
            <description>Packages required to develop</description>
            <uservisible>true</uservisible>
            <packagelist>
              <packagereq type="default">gcc</packagereq>
            </packagelist>
          </group>
        </comps>

    It is packaged with createrepo -g comps.xml x86_64. The ssh and gcc RPMs are not present in the x86_64 directory. If I run yum groupinstall development, yum is smart enough to pull the gcc package from the RHEL repo even though the groups are defined in my internal repository. However, is this the proper way of doing things, or should I copy the RPMs to my local repository and recreate the repo?
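
    As a hedged convenience, the group definitions are plain XML, so it is easy to list each group's packages (for example, to check which of them actually exist in the local repository before deciding whether to copy RPMs in):

        import xml.etree.ElementTree as ET

        tree = ET.parse("comps.xml")
        for group in tree.getroot().findall("group"):
            gid = group.findtext("id")
            pkgs = [p.text for p in group.findall("packagelist/packagereq")]
            print("%s: %s" % (gid, ", ".join(pkgs)))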

  • Excel / OpenOffice: append an incrementing value to all non-unique fields

    - by mheavers
    I have a large table of about 7500 store names. I need to search through those names and, if they are not unique, append an incrementing value, for example:

        store_1
        store_2

    and so on. Does anyone know how to do this? For another project, I was using this:

        =J1&IF(COUNTIF($J$1:J1,J1)>1,COUNTIF($J$1:J1,J1),"")

    but in OpenOffice this gives an error, and in Google Spreadsheets it times out because my database is so big. Any suggestions?
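
    If the spreadsheet route keeps timing out, the same de-duplication is a few lines of Python. A sketch with hypothetical input; like the COUNTIF formula, the first occurrence keeps its plain name and repeats get an incrementing suffix:

        from collections import Counter

        names = ["store", "shop", "store", "store"]   # hypothetical input column
        seen = Counter()
        out = []
        for n in names:
            seen[n] += 1
            # First occurrence keeps its name; repeats get _2, _3, ...
            out.append(n if seen[n] == 1 else "%s_%d" % (n, seen[n]))
        print(out)   # ['store', 'shop', 'store_2', 'store_3']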

  • Using ffmpeg to cut up video

    - by Neil
    I am using ffmpeg like this, for example:

        ffmpeg -i input.wmv -ss 60 -t 60 -acodec copy -vcodec copy output.wmv

    to cut a section out of a large file. The -ss part works fine, but the -t is ignored: the command correctly removes the first -ss seconds, but then just keeps copying to the end of the input. Is there a way to use ffmpeg to cut off the end of a video without re-encoding it?

  • Is there a way to automatically keep Chrome and the Ask Toolbar from installing?

    - by hydroparadise
    Lately I've had to warn my users to watch out for unwanted programs that come bundled with Adobe Flash and Java updates: Adobe's Flash updater pushes Google Chrome, and Java's pushes the Ask.com Toolbar. I admit it could be much worse, since both installers simply require unchecking a box at some point in the update process, but on a large scale, prevention is better than confrontation. Any suggestions?
