Search Results

Search found 30046 results on 1202 pages for 'document load'.


  • Returning Shapes to Default in Excel

    - by Craig
    Hi, I have around 70 shapes in a planning document I use for work. Everything is fine, but I am trying to add a new feature. These shapes are reshaped using edit points each week so they show up on a map, but sometimes shape "A" may not get used in a given week, in which case I just want to turn it back to a default size, along with all the other unused shapes. Does anyone know how I could achieve this via a macro? I have tried lots of things and searched everywhere, but I am at my wits' end. In short: if a shape is not at its default size, set all non-default shapes back to the default size. Thanks in advance.
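    A rough sketch of one way a macro-style script could do this, assuming each shape's default width and height are known up front (the sizes and workbook path below are hypothetical, and the same loop translates almost line for line into VBA). Note that this only resets the overall size; if the edit-point geometry itself needs resetting, you would instead keep a hidden template copy of each shape and copy it back:

        # Sketch: reset every known shape on a worksheet to a stored default size.
        # Requires Excel on Windows and the pywin32 package (win32com).
        import win32com.client

        # Hypothetical defaults: shape name -> (width, height) in points.
        DEFAULT_SIZES = {
            "Shape A": (100.0, 50.0),
            "Shape B": (120.0, 60.0),
        }

        excel = win32com.client.Dispatch("Excel.Application")
        wb = excel.Workbooks.Open(r"C:\path\to\planning.xlsm")  # hypothetical path
        ws = wb.Worksheets(1)

        shapes = ws.Shapes
        for i in range(1, shapes.Count + 1):
            shape = shapes.Item(i)
            default = DEFAULT_SIZES.get(shape.Name)
            if default is None:
                continue  # no recorded default for this shape
            width, height = default
            if shape.Width != width or shape.Height != height:
                shape.LockAspectRatio = False  # let width and height change independently
                shape.Width = width
                shape.Height = height

        wb.Save()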

    Read the article

  • Intermittent HTTP 401 errors

    - by forthrin
    I am using an intranet solution which requires HTTP basic authentication. However, there is an intermittent error which requires me to log in again, and then the server says "Forbidden" whether or not I give the correct login information. To add insult to injury, Safari (and Chrome) seems to show the login dialog for every included resource in the HTML, and it's impossible to cancel this modal dialog sequence, so the whole browser is blocked until I've pressed Esc some 30-odd times. After an hour, I may gain access again, without really having done anything. My questions: What could cause intermittent 401 errors? Why do the browsers show the login dialog 30 times per page load (presumably once for every included resource in the HTML from the same domain)?
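    One way to take the browsers out of the picture is to poll the same URL with the same credentials from a script and log what the server actually returns over time. A minimal sketch with Python and the requests library (the URL and credentials are placeholders):

        # Sketch: log the HTTP status the intranet server returns every 30 seconds,
        # to see when and for how long the 401/403 responses actually occur.
        import time
        import requests

        URL = "http://intranet.example.com/"   # placeholder
        AUTH = ("username", "password")        # placeholder credentials

        while True:
            try:
                resp = requests.get(URL, auth=AUTH, timeout=10)
                print(time.strftime("%H:%M:%S"), resp.status_code, resp.reason)
            except requests.RequestException as exc:
                print(time.strftime("%H:%M:%S"), "request failed:", exc)
            time.sleep(30)

    If the script sees the same bursts of 401/403, the problem is on the server side rather than in the browsers' credential handling.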

    Read the article

  • Converting massive images to PDF, without crashing applications

    - by BloodyIron
    I'm trying to work with a large-format scanner, and we are scanning very long documents. For example, one of our documents was cut into two pieces, and one of those pieces is 3633x82486 pixels. My application, Scanning Master 21+, which comes with the device (a Graphtec CSX300-09), can output PDF; however, when I try to save to PDF it complains that the file is too large. I can successfully output to BMP, and GIMP can even open the BMP after taking a while to load it. The resulting files range from 200 MB to 1.2 GB in size. Acrobat refuses to open the BMP, saying it isn't supported or is damaged (which I know is not true), and the PDF plugin for GIMP crashes when I try to export to PDF. I'm really not sure what the best tool for this job is. So what is the best tool to produce PDF documents from very large images?
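    For what it's worth, Pillow (the Python imaging library) can usually write a PDF directly from a BMP, as long as the machine has enough RAM to hold the decoded image and the decompression-bomb guard is lifted. A minimal sketch (file names are placeholders):

        # Sketch: convert a very large BMP scan to a single-page PDF with Pillow.
        # A 3633x82486 RGB image needs roughly 900 MB of RAM once decoded.
        from PIL import Image

        Image.MAX_IMAGE_PIXELS = None  # disable the decompression-bomb size check

        img = Image.open("scan_part1.bmp")      # placeholder file name
        if img.mode not in ("RGB", "L"):
            img = img.convert("RGB")            # PDF output wants RGB or grayscale
        img.save("scan_part1.pdf", "PDF", resolution=200.0)  # DPI of the scan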

    Read the article

  • Wireless only connects to Google

    - by dmanxiii
    I am using a Netgear wireless router to create a wireless network. When connected to the router via an Ethernet cable, everything works correctly. When connecting to the router via wireless, only Google can be accessed. I've done Google searches using random words to make sure it is not a cached page, and these all work; however, when trying to reach any non-Google domain, the page fails to load. I've used the connection with two different laptops, both running Windows Vista. The signal is strong on both, but the issue persists on both. Edit: I have configured the wireless network's name and password and connect using the credentials I defined, so I am sure I am not connecting to a neighbor's wireless network. Edit 2: Typing in the direct IP address for CNN.com ( http://157.166.255.19 ) does not load the page either.

    Read the article

  • htaccess and htpasswd trouble

    - by hjpotter92
    This is the first time I have ever tried working with .htpasswd and .htaccess files, so please point out my beginner mistakes. I have my Apache document root set to /www/ on my Debian server. Inside it there is a folder named Logs/, to which I want to restrict access using htpasswd. I created my htpasswd file using the shell's htpasswd command, and this is the result:

        user:<encoded password here>
        hjp:<encoded password here>
        hjpotter92:<encoded password here>

    I put this file, named .htaccess, inside /www/. The Logs/ folder has the following .htaccess file in it:

        AuthName "Restricted Area"
        AuthType Basic
        AuthUserFile /www/.htpasswd
        AuthGroupFile /dev/null
        require valid-user

    This was again created using an online tool (I forgot its name/link and can't search the browser history now). The problem, as may already be obvious to you, is that I see no change in access to my Logs folder: it is still accessible to everyone. I am running Apache as the root user (if that matters/helps). Please help/guide me. I've tried reading some htaccess guides and have followed some older SO questions, but I still haven't figured out a way to restrict access to the Logs folder with a password.

    Read the article

  • How do I recover an accidentally closed Opera window?

    - by Kostas
    Hello there, and thanks for all the help! I accidentally closed a window with multiple tabs in Opera. There was another window with a couple of tabs running. I closed and restarted Opera in the hope that it would restore the windows at startup (it usually asks if I want to continue from last time). Is there a way I can retrieve my closed window with all its tabs? I cannot see them in the history either, probably because when I switched on my PC the Internet connection was down and the pages didn't load. :-/

    Read the article

  • Incomplete Ubuntu 12.04 install when dual-booting with XP

    - by Mike
    This weekend was the first time I tried to install Ubuntu. On the initial install (I am using a USB stick), the installation went all the way through and asked to restart when completed, but I was not able to get GRUB to boot and the machine kept going straight into Windows. After some research I found some articles on updating/reinstalling GRUB, so I followed those. After a day I finally got GRUB to load, but there was no Windows option, only Ubuntu 12.04, which when I selected it only gave me a fatal error 17. I booted from the USB again, deleted the partitions and installed again; this time I got an error 15. I then booted into XP, downloaded Wubi.exe, uninstalled Ubuntu and reinstalled again. The installation went to the very end and then gave an error message (I don't remember exactly what it said), something along the lines of checking my logs on my C drive. I then uninstalled Ubuntu, removed the Wubi.exe file, wiped my USB stick and downloaded to the USB again. I booted through USB and ran the install process again. It again went through the install process, but after creating the username and password and hitting Continue, the installation dialog box disappears and the spinning mouse wheel is displayed, and I never receive the prompt to restart. I can still access the side menu for Ubuntu, but the wheel keeps spinning. How do I get Ubuntu to install properly?

    Read the article

  • What can be the reasons for localhost responding super-slowly the first time a page is requested?

    - by frequent
    Still learning my server ways with Apache 2.2/MySQL 5.2/ColdFusion 8 on localhost (running Windows XP). What I notice is that every time I request a page for the first time after firing up ColdFusion and Apache, localhost takes forever (1+ minute) to respond and send the initial page. After that, everything seems to run at normal loading times. I'm using require.js to pull in jQuery, jQuery Mobile and two other plugins on the first page load, but loading the same page from a real server works normally, so I'm ruling that out as a probable cause. Since it happens regardless of which page I load first, it should not be page-related, so I'm looking for other clues as to why this could happen. Thanks for giving it some thought!
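    To confirm that only the very first request after a restart is slow (which would point at server warm-up rather than at the page itself), a quick timing loop helps; a hedged Python sketch, with a placeholder URL:

        # Sketch: time a handful of consecutive requests to the same local page.
        # If only the first one is slow, the delay is warm-up, not the page.
        import time
        import requests

        URL = "http://localhost/index.cfm"  # placeholder

        for attempt in range(5):
            start = time.time()
            resp = requests.get(URL, timeout=300)
            print("request %d: %6.2f s  (HTTP %d)"
                  % (attempt + 1, time.time() - start, resp.status_code))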

    Read the article

  • Mitigating the 'firesheep' attack at the network layer?

    - by pobk
    What are sysadmins' thoughts on mitigating the 'firesheep' attack for the servers they manage? Firesheep is a new Firefox extension that allows anyone who installs it to sidejack any session it can discover. It does its discovery by sniffing packets on the network and looking for session cookies from known sites, and it is relatively easy to write plugins for the extension that listen for cookies from additional sites. From a systems/network perspective, we've discussed the possibility of encrypting the whole site, but this introduces additional load on the servers and interferes with site indexing, assets and general performance. One option we've investigated is using our firewalls to do SSL offload, but as I mentioned earlier, this would still require the whole site to be encrypted. What are the general thoughts on protecting against this attack vector? I've asked a similar question on Stack Overflow, but it would be interesting to see what the systems engineers think.

    Read the article

  • My home partition slowly fills up until the system is unable to complete even simple tasks

    - by user973810
    So I have an awesome configuration on my home PC: my /home directory has its own partition. My home partition slowly fills up until the system is unable to complete even simple tasks. For example, when this issue has occurred, I cannot load Firefox; it just pops up an error message saying that it cannot be done. Rebooting solves the issue. The strange thing is, I've run baobab and it doesn't notice a problem. There should be hundreds of gigabytes of data somewhere, but it doesn't see them. Does anyone have any idea how I might troubleshoot this issue? I'm thinking I could run lsof, but I've always found its output to be too much information. Maybe my drive is, like, dying. Edit: is there a /home analog of /var that gets cleared out at boot time? Maybe I could check in there next time I notice this problem to see if I can divine what's up. Update: I found the issue. My .xsession-errors file is filling up with "Authentication deferred - ignoring client message". Is there a way I can see what is causing this error and fix it?
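    For the "where is the space going" part of the question, a small sketch like the following (Python, standard library only) samples file sizes under the home directory twice and prints whichever files grew the most in between; that is how a runaway log such as .xsession-errors shows up even when a GUI tool misses it:

        # Sketch: find the files under $HOME that grow the most over a short interval.
        import os
        import time

        HOME = os.path.expanduser("~")
        INTERVAL = 60  # seconds between the two samples

        def sizes():
            result = {}
            for root, _dirs, files in os.walk(HOME):
                for name in files:
                    path = os.path.join(root, name)
                    try:
                        result[path] = os.path.getsize(path)
                    except OSError:
                        pass  # file vanished or is unreadable
            return result

        before = sizes()
        time.sleep(INTERVAL)
        after = sizes()

        growth = sorted(((after.get(p, 0) - s, p) for p, s in before.items()), reverse=True)
        for delta, path in growth[:10]:
            if delta > 0:
                print("%10d bytes  %s" % (delta, path))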

    Read the article

  • Automatically change the gnome-terminal window title

    - by tom
    Hi. I'm trying to change the title of a current gnome-terminal window (similar to the "set title" that you can do manually). The system is running Fedora 9. The Xterm-Title HOWTO discusses how to set the title from the prompt for an xterm. I tried to implement the escape sequences, with no luck (might be something weird). I tried to use gconftool to dump/change/load the changed configuration attributes, and again, no luck. I also set PROMPT_COMMAND, just in case the prompt command was somehow changing the title back (which is highly doubtful). Searching the net indicates that a few people have tried to solve this with no luck. I'd also like to figure out how to create a new gnome-terminal with a unique, specified title. Once this is solved, I'll gladly create a quick writeup/post on how to accomplish this for others. Thanks.
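    For reference, the xterm title escape sequence that gnome-terminal also honours is ESC ] 0 ; title BEL. A tiny sketch of emitting it from Python (the same bytes can be sent with echo -ne or embedded in PS1; the title string below is hypothetical):

        # Sketch: set the title of the current terminal window via the xterm
        # escape sequence (ESC ] 0 ; <title> BEL), which gnome-terminal understands.
        import sys

        def set_terminal_title(title):
            sys.stdout.write("\033]0;%s\007" % title)
            sys.stdout.flush()

        set_terminal_title("build server - logs")  # hypothetical title

    Whether the title sticks can also depend on the terminal profile's title settings and on PROMPT_COMMAND rewriting it at the next prompt.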

    Read the article

  • Bad HD video deinterlacing processing

    - by Guy Fawkes
    I have Ubuntu 12.04 32-bit with Unity. My system configuration is: CPU: Core 2 Quad Q6600 (2.4 GHz); RAM: 8192 MB DDR2 Kingston; Video: Palit GeForce GTX 260 216 SP; screen resolution 1680x1050. I also have Windows 7 Ultimate installed, and there I can watch the same files in Media Player Classic without any horizontal lines. I've installed the VDPAU driver, NVIDIA drivers 304.51, and MPlayer2 (within SMPlayer). I've disabled the "Sync to VBlank" option in CCSM (because otherwise, by default, the MPlayer process uses about 50-60 percent of my CPU), tried to switch between different deinterlace options in SMPlayer, used the "-vc ffh264vdpau,ffmpeg12vdpau" parameters (without quotes) for MPlayer, and switched to "Ubuntu 2D", but so far with no results. Any suggestions? How should I set up MPlayer?

    Read the article

  • Proxy service like Apache httpd

    - by Aptos
    Currently I am trying to simulate my app as distributed servers, so I let them run on localhost:9000 and localhost:9001. I tried using the Apache load balancer, but it is really hard to configure on a Mac. My idea is that the second server, localhost:9001, will be kept idle and requests will only be redirected to it when the first server is down. Is there any good free program that can do that (other than Apache httpd)? Extra question: my application is written in Java and maintains an in-memory object. Is there any service that can synchronize that object between the two servers so each keeps an up-to-date view of the other's state (the second one takes over the state of the first one)? Is there any app that can support that? Thank you very much.

    Read the article

  • What is the average page size for a single-page application (SPA)? [on hold]

    - by Emmanuel Istace
    I'm developing a single-page application with a lot of CSS and JavaScript. For now the page weighs 1.3 MB and is composed of 5 sections. Here are the rounded stats:

    Document: 10 kB
    Style: 60 kB
    Images: 450 kB (already compressed; includes a big gallery of thumbnails)
    JavaScript: 700 kB (600 kB of "framework" code: jQuery, jQuery UI, Bootstrap, Modernizr, Waypoints, ..., plus 100 kB of custom JS)
    Fonts: 125 kB

    And the site is not finished yet (it will include the Google Maps API and some others...). My questions are: Do you have any statistics about the average weight of an SPA? Given that this is the whole website, do you think it's acceptable? Is lazy loading (for images) a solution? What will be the impact on SEO? Is the "200 kB rule" of Google still relevant? Do you know of good tools to detect which JavaScript code is not used during the execution of a page, and therefore how much of those 700 kB of framework JS could be trimmed? Can a caching strategy be an answer?

    Read the article

  • Can I use a 302 redirect to serve up static content from a URL with _escaped_fragment_?

    - by Starfs
    We would like to serve up SEO-friendly Ajax-driven content, and we are following this documentation. Has anyone ever tried writing a 302 redirect into the .htaccess file that takes the ?_escaped_fragment_= string and sends it to a static page, for example /snapshot/yourfilename/? How will Google react to this? I've gone through the documentation and it's not very clear. The quote below is what I found in Google's documentation; I'm not sure whether they are saying that you can redirect the _escaped_fragment_ URL to a different static page, or that this is about redirecting the #! (hashbang) URL to static content. Thoughts? From Google's site:

        Question: Can I use redirects to point the crawler at my static content?

        Redirects are okay to use, as long as they eventually get you to a page that's equivalent to what the user would see on the #! version of the page. This may be more convenient for some webmasters than serving up the content directly. If you choose this approach, please keep the following in mind: Compared to serving the content directly, using redirects will result in extra traffic because the crawler has to follow redirects to get the content. This will result in a somewhat higher number of fetches/second in crawl activity. Note that if you use a permanent (301) redirect, the url shown in our search results will typically be the target of the redirect, whereas if a temporary (302) redirect is used, we'll typically show the #! url in search results. Depending on how your site is set up, showing #! may produce a better user experience, because the user will be taken straight into the AJAX experience from the Google search results page. Clicking on a static page will take them to the static content, and they may experience avoidable extra page load time if the site later wants to switch them to the AJAX experience.

    Read the article

  • PuTTY SSH client frequently becomes unresponsive on Windows 7

    - by Sankaran
    With a Celeron processor and 512 MB of RAM, Windows 7 takes too much time even to open a folder in Windows Explorer. I connect to the Internet with a GPRS modem (a USB data card). When I try to connect to a remote Linux machine with the PuTTY SSH client, the problem is that PuTTY goes "Not responding" within 2 or 3 minutes. I have to reconnect, and PuTTY goes "Not responding" again. When I tried Safe Mode with Networking, the GPRS modem was not even detected. Is it possible to load the USB driver in Safe Mode on Windows 7, or is there any other way to connect to the remote machine via SSH?

    Read the article

  • What is a proper MySQL replication configuration for frequent DB updates and rare selects?

    - by serg555
    We currently have one master DB on its own server and a slave DB on the app server. The app executes very frequent but light updates (like increasing counters) and occasional (once every few minutes) heavy selects, which are the most important part of the app. When the app was connected only to the master DB there were no performance issues. With the slave DB introduced, the CPU load average on the app server increased to about 6-10 during those heavy-select periods (from 3-4 before). When the server isn't running those frequent updates, performance for the selects seems to stay within limits. So I have a feeling that the updates are what is causing the performance drop (also, these frequent updates are not critical, so if the slave DB is out of sync with the master for some time, that would be OK). What would be a good DB replication setup for this kind of app? What replication parameters could we tweak? Thanks.
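    Since it is acceptable for the slave to lag behind on those counter updates, it can be useful to keep an eye on how far behind it actually gets while experimenting with settings. A hedged sketch using Python and the mysql-connector package (hostnames and credentials are placeholders):

        # Sketch: print the slave's replication delay as reported by SHOW SLAVE STATUS.
        import mysql.connector

        conn = mysql.connector.connect(host="slave-db.example.com",  # placeholder
                                       user="monitor", password="secret")
        cur = conn.cursor(dictionary=True)
        cur.execute("SHOW SLAVE STATUS")
        status = cur.fetchone()
        if status is None:
            print("This server is not configured as a replication slave.")
        else:
            print("Seconds_Behind_Master:", status["Seconds_Behind_Master"])
            print("Slave_IO_Running:", status["Slave_IO_Running"])
            print("Slave_SQL_Running:", status["Slave_SQL_Running"])
        cur.close()
        conn.close()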

    Read the article

  • SEO: Getting site to show in location-specific searches

    - by willvv
    I'm really new to this SEO world and I've been reading a lot to try to figure it out. We have a site, moodbond.com, that allows users to browse/create events anywhere, and we fill it with content from the main cities in the US. We would like it to show up for searches such as "events in san francisco" or "what to do in new york"; however, since the site is not really location-specific, I'm not really sure where to begin. I've been thinking about a couple of things; maybe you can help me decide whether these would be a good way to start or whether I should try something different:

    1. Allow location-specific URLs (e.g. moodbond.com/browse/san-francisco) that just show the main page centered on San Francisco.
    2. Change the headers/title of the page so they adapt automatically to the city being browsed (and change this dynamically as the user changes the location of the map).
    3. Add internal links to different locations, e.g. a link in the footer of the page that says "Events in Seattle" and makes the site load events in that city (this would probably depend on implementing #1).

    What do you guys think? Will any of these really help, or should I look for a different approach? Any advice is welcome. Thanks.
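    Options 1 and 2 together are straightforward to prototype; a hedged sketch with Python/Flask (the route, city list, and page template are hypothetical, purely to illustrate city-specific URLs carrying city-specific titles):

        # Sketch: one crawlable URL per city, each with its own <title> and heading.
        from flask import Flask, abort, render_template_string

        app = Flask(__name__)

        CITIES = {  # hypothetical slugs -> display names
            "san-francisco": "San Francisco",
            "new-york": "New York",
            "seattle": "Seattle",
        }

        PAGE = """<html><head><title>Events in {{ city }} | moodbond</title></head>
        <body><h1>What to do in {{ city }}</h1><!-- map centered on {{ city }} --></body></html>"""

        @app.route("/browse/<city_slug>")
        def browse_city(city_slug):
            city = CITIES.get(city_slug)
            if city is None:
                abort(404)
            return render_template_string(PAGE, city=city)

        if __name__ == "__main__":
            app.run()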

    Read the article

  • PeopleSoft CRM 9.2 Release Value Proposition

    - by Race Bannon
    Oracle's PeopleSoft Customer Relationship Management (CRM) delivers solutions that have been tailored to fit your industry business processes, your customer strategies, and your success criteria. With PeopleSoft CRM 9.2, organizations will be able to deploy a solution that delivers built-in best practices specific to your industry on a highly configurable, tightly integrated platform, ensuring that solutions will be fast to implement. The result is less configuration, less customization, and less integration. PeopleSoft CRM is a world-class solution for organizations of every size, and Oracle's planned product roadmap for PeopleSoft applications is to deliver valuable, needed features for all of an organization's constituents along three design principles (Simplicity, Productivity, and Lowered Total Cost of Ownership), as well as new application functionality as prioritized by our customers. The upcoming 9.2 release of PeopleSoft Customer Relationship Management focuses on these themes of Simplicity, Productivity, and Lower Total Cost of Ownership while also delivering robust new functionality to help your organization succeed. The recently published PeopleSoft CRM 9.2 Release Value Proposition provides an overview of the new features and enhancements planned for these applications in Release 9.2. This document offers customers a road map intended to help them assess the business benefits of upgrading to the 9.2 release while also helping them plan their IT projects and investments. (The link is to a My Oracle Support page, available to customers and partners.) Oracle continues to deliver enterprise-wide features that enhance our customers' ownership experience and help them run their businesses more efficiently and profitably. With the CRM 9.2 release, we continue to abide by this firm commitment we've made to our customers.

    Read the article

  • Reverse proxy with SSL and IP passthrough?

    - by Paul
    It turns out that the IP of a much-needed new website is blocked from inside our organization's network, for reasons that will take weeks to fix. In the meantime, could we set up a reverse proxy on an Internet-based server which would forward SSL traffic, and perhaps client IPs, to the external site? Load will be light. There is no need to terminate SSL on the proxy. We may be able to poison DNS so the original URL can work. How do I find out whether I need URL rewriting? Squid/Apache/nginx/something else? Setup would be fastest on Windows 2000, but other OSes are OK if that would help. Simple and quick are good, since it's a temporary solution. Thanks for your thoughts!

    Read the article

  • Printer Canon MP540 successfully added at last, but doesn't print (debug log attached)

    - by NES
    I tried to set up my printer in Ubuntu 10.10. I had to use a special guide to install it, because Canon uses libcupsys2 in its packages and Ubuntu expects libcupsys, so I followed this guide in reference to this advice. The first problem was that the Ubuntu "add printer" dialog asked for root credentials; the workaround suggested in Launchpad, creating a root password, worked, and I added the printer. It is available for printing. Then I got a "cups insecure filter" error message, which prevented me from printing. That could be solved by setting the needed root permissions on the /usr/lib/cups/filter/ directory, and the error message disappeared after restarting the CUPS service. Now it should work, but it doesn't. The main problem now is that the printer seems to be properly set up, but when I try to print a document, the printer icon appears briefly in the GNOME panel and there is a print job in the queue which gets marked completed, yet the printer doesn't print. I attached the debug log provided by the printer error control; I had to upload it to another site, since it was too big for the question body here. Perhaps someone can identify the problem from it? Note: I know this once worked fine with an older release of Ubuntu, but I'm not sure which version that was.

    Read the article

  • Component MSINET.OCX or one of its dependencies not correctly registered: a file is missing or invalid

    - by tintincute
    Hi, can someone please help me? I'm using Windows 7 64-bit and I'm trying to download the program "Croque-Mort". But when I do, I get an error: "Component MSINET.OCX or one of its dependencies not correctly registered: a file is missing or invalid". I tried to check it by opening cmd.exe (run as administrator) and typing: regsvr32 msinet.ocx. Then I got an error: "The module "MSINET.OCX" failed to load. Make sure the binary is stored at the specified path or debug it to check for problems with the binary or dependent .DLL files. The specified module could not be found." Is there a way to install this component? I would appreciate your help.

    Read the article

  • Improve efficiency when using parallel to read from compressed stream

    - by Yoga
    This is another question extending a previous one [1]. I have a compressed file and stream it into a Python program, e.g.

        bzcat data.bz2 | parallel --no-notice -j16 --pipe python parse.py > result.txt

    parse.py reads from stdin continuously and prints to stdout. My EC2 instance has 16 cores, but the top command shows a load average of only 3 to 4. From ps, I am seeing a lot of entries like

        sh -c 'dd bs=1 count=1 of=/tmp/7D_YxccfY7.chr 2>/dev/null';

    I know I could use -a in.txt to improve performance, but in my case I am streaming from bz2 (I cannot extract it first, since I don't have enough disk space). How can I improve efficiency in my case? [1] Gnu parallel not utilizing all the CPU
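    For reference, the kind of filter the pipeline above assumes is just a stdin-to-stdout loop; a minimal sketch of such a parse.py is below (the actual parsing logic is a placeholder). Keeping it line-oriented matters because parallel --pipe splits the stream on line boundaries, and raising --block (e.g. --block 10M) hands each worker a bigger chunk, which usually reduces the per-chunk overhead visible in those dd processes:

        # Sketch: a line-oriented filter suitable for "parallel --pipe".
        # Reads records from stdin, writes results to stdout, never buffers
        # the whole stream in memory.
        import sys

        def parse(line):
            # placeholder for the real parsing logic
            return line.strip().upper()

        def main():
            for line in sys.stdin:
                result = parse(line)
                if result:
                    sys.stdout.write(result + "\n")

        if __name__ == "__main__":
            main()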

    Read the article

  • 301 redirect from HTTP to HTTPS - how to be sure Google is fetching the correct information?

    - by user33692
    I'm hoping somebody might be able to provide a bit of advice on an issue I am having. I have one site where we implemented a 301 redirect on the homepage from HTTP to HTTPS. We have links on the homepage to other parts of the site that are not under SSL (in fact there is only one other page under SSL). When I go to our Webmaster Tools account, I notice that we are not being given any webmaster information (e.g. search queries, backlinks, etc.) related to our homepage under SSL. I performed a Fetch as Google on the homepage, and the information it returned is:

        HTTP/1.1 301 Moved Permanently
        Date: Fri, 08 Nov 2013 17:26:24 GMT
        Server: Apache/2.2.16 (Debian)
        Location: https://mysite.com/
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 242
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=iso-8859-1

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>301 Moved Permanently</title>
        </head><body>
        <h1>Moved Permanently</h1>
        <p>The document has moved <a href="https://mysite.com/">here</a>.</p>
        <hr>
        <address>Apache/2.2.16 (Debian) Server at mysite.com</address>
        </body></html>

    I am worried that Google's fetch is not getting the correct title tags and meta information from our homepage, and that this is hurting our search results. Additionally, I am worried that we need to do something specific with the sitemap to ensure that Google correctly indexes all our pages and can move between the HTTPS and HTTP pages without issues. Does anybody have any advice on how we can set this up correctly, or be sure that Google is fetching the correct information?
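    A quick way to double-check what a crawler-like client sees is to follow the redirect chain from the HTTP URL and confirm that the final HTTPS response carries the real page (with its title and meta tags) rather than the Apache "301 Moved Permanently" stub. A hedged Python sketch (the domain is a placeholder taken from the fetch output above):

        # Sketch: follow redirects from the HTTP homepage and inspect the result.
        import requests

        resp = requests.get("http://mysite.com/", allow_redirects=True, timeout=15)

        for hop in resp.history:
            print(hop.status_code, hop.url, "->", hop.headers.get("Location"))
        print("final:", resp.status_code, resp.url)

        # The real <title> should appear here, not "301 Moved Permanently".
        body = resp.text
        start = body.find("<title>")
        end = body.find("</title>")
        if start != -1 and end != -1:
            print("title:", body[start + len("<title>"):end].strip())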

    Read the article

  • High CPU Steal percentage on Amazon EC2 Instance

    - by Aditya Patawari
    I am experiencing a high CPU steal percentage on an Amazon EC2 large instance. I know it means that my virtual CPU is waiting on the real CPU of the machine for time. My question is: what can I do to reduce this percentage and get the maximum out of the CPU? The steal percentage is consistently at 20%, and the system load crosses 10 when this happens. I have checked memory and network, and I am sure they are not the bottleneck. Is that normal for such an environment? Also, are there any system-level optimization techniques for reducing the steal percentage from within the virtual instance?

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  52.38    0.00    8.23    0.00   21.21   18.18
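    For continuous monitoring from inside the instance, the steal figure can be computed directly from /proc/stat (the eighth cpu counter) by sampling twice; a minimal sketch, assuming a Linux guest:

        # Sketch: compute %steal over a short interval from /proc/stat.
        import time

        def cpu_counters():
            with open("/proc/stat") as f:
                fields = f.readline().split()  # "cpu user nice system idle iowait irq softirq steal ..."
            return [int(v) for v in fields[1:]]

        first = cpu_counters()
        time.sleep(5)
        second = cpu_counters()

        deltas = [b - a for a, b in zip(first, second)]
        total = sum(deltas)
        steal = deltas[7] if len(deltas) > 7 else 0  # 8th counter is "steal"
        print("steal: %.1f%%" % (100.0 * steal / total if total else 0.0))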

    Read the article
