Search Results

Search found 72070 results on 2883 pages for 'custom error pages'.


  • Adventures in Windows 8: Solving activation errors

    - by Laurent Bugnion
    Note: I tagged this article with “MVVM” because I got a few support requests for MVVM Light regarding this exact issue. In fact it is a Windows 8 issue and has nothing to do with MVVM Light per se… Sometimes when you work on a Windows 8 app you will hit a very annoying issue when starting the app: it doesn't even get past the splash screen. Putting a breakpoint in App.xaml.cs doesn't help because the app never reaches that point! So what exactly is happening? When a Windows 8 app starts, the system performs a few checks first. One of them, for instance, is to see if an app with the same package ID is already installed. The package ID is a unique value set in the package manifest: in the Solution Explorer, double click Package.appxmanifest to open the manifest in a special editor, click the Packaging tab and look at the GUID under Package Name. This is the unique ID I am talking about. If there is a conflict (i.e. if an app is already installed with the exact same ID), Windows will warn the user that the app is already installed. However, when you are developing an app you install and uninstall it many, many times (every time you start it from Visual Studio), and sometimes issues arise, for instance failing to uninstall the app before starting the new instance of the same app.

    First step if you get such an error: when the application fails to start past the splash screen, the first step is to identify what kind of error happened. In my experience the “already installed” error is by far the most frequent (in fact I never had any other), but it can be something else. An annoying thing is that the popup showing the error usually opens below the Windows 8 app, so you don't even see it! This is especially true if you run the app in the Simulator. In that case, press the Simulator's home button, then press the Desktop tile on the Start menu; the error popup should be shown on the desktop. If your application runs on the Local Machine, do the same: press the Windows button, then press the Desktop tile on the Start menu.

    Deployment error in Visual Studio: sometimes the same error causes Visual Studio to fail to launch the application at all, with a deployment error. This is a better case, because at least it is clear that there is an issue. Write down the code shown in the Error window (for instance 0x80073D05 in the example below), then go to the “Troubleshooting packaging, deployment, and query of Windows Store apps” page and look up the code in question. In my case, the error was “ERROR_DELETING_EXISTING_APPLICATIONDATA_STORE_FAILED”, “An error occurred while deleting the package's previously existing application data.”

    Solving the “ERROR_DELETING_EXISTING_APPLICATIONDATA_STORE_FAILED” issue. Update: before trying the steps below, you can also try the simple ones: exit Visual Studio, go to the Start menu, locate your app's tile (it should be visible directly in the Start menu, towards the far right), right click the tile and select Uninstall from the App Bar, then restart Visual Studio and try again. Sometimes that helps. If it doesn't, then, to handle the case where Windows for any reason fails to delete the existing application before starting the new instance, open Package.appxmanifest in Visual Studio, open the Packaging tab and change the Package Name. For tests you can just change the last character of the GUID, though I would recommend creating a brand new GUID: press Start, type GUID, start the GUID Generator application, select Registry Format and press Copy, then paste the new GUID in place of the Package Name in Visual Studio. Important: don't forget to remove the curly brackets at the beginning and at the end of the newly pasted GUID. Then you just have to cross your fingers and start the application again… If it works, celebrate. If it doesn't… well, at this point I am not sure, so good luck with that ;) Happy coding, Laurent. Laurent Bugnion (GalaSoft)
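    For reference, the Package Name edited in the Packaging tab is stored in the Identity element of Package.appxmanifest. A minimal sketch of that element, with placeholder values rather than anything taken from this article:

        <Identity Name="d7c2f3a1-1111-2222-3333-444444444444"
                  Publisher="CN=SomePublisher"
                  Version="1.0.0.0" />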

    Read the article

  • Disable mobile page redirection for SharePoint 2013

    - by Sahil Malik
    SharePoint, WCF and Azure Trainings: more information. SharePoint 2013 (Foundation too) detects requests from mobile devices and automatically changes the URL of the requested non-mobile page to its mobile substitute. This logic is now built into SPRequestModule. The mobile view is pretty damned amazing. The set of pages for mobile access is completely different, and SharePoint has an entirely separate set of controls for the mobile pages. These live in the Microsoft.SharePoint.MobileControls namespace and inherit from the ASP.NET controls in the System.Web.UI.MobileControls namespace. These mobile pages can even use mobile Web Part adapters to mimic the behavior of web parts on mobile web part pages. Read full article ....

    Read the article

  • What are some best practices for minimizing code?

    - by CrystalBlue
    While maintaining the sites our development team has created, we have come across include files and plugins that have proven to be very useful to more than one part of our applications. Most of these modules come with two different files, a normal source file and a minified (.min) file. Seeing that the performance and speed of a page can be improved by reducing file size, we're looking into doing that to our pages as well. The problem is that a lot of our normal pages (written in classic ASP) are a mix of HTML, ASP, JavaScript, CSS, and include files. Some pages keep their JS both in include files and in the page itself, depending on whether a function is only really used in that page or in many other pages. For example, we have a common.js and an ajax.js file; both are used in a lot of pages, but not all of them, and some functions in a page don't really make sense to move into one master file. What I have seen a few other people do online is put all of their JavaScript into one master JS file, minify it, gzip it, and only use that on their production server. Again, this would be great, but I don't know if that fully works for our purposes. What I'm looking for is some direction on this. I'm in favor of taking all of our JS, putting it in one include file, and having it included in every page that is hit. However, not every page needs every bit of JS. So would it be worth compiling and minifying the files into one master file and including it everywhere, or would it be better to minify all the other files and still include them on a need-to-use basis?

    Read the article

  • Why does the BADSIG/"Untrusted sources" error recur forever?

    - by Jeff McMahan
    On at least a dozen occasions, I've spent 2-3 hours figuring out how to get Ubuntu 11.10-12.10 to either update or acquire software from the Software Center, or both. I want to fix whatever is causing the BADSIG problem once and for all; I've wasted so much time trying to get this to work well enough that I can rely on it, but the same problem comes back after a couple of weeks of normal updates and Software Center usage. Don't refer me to a standard posted solution on the web; whatever it is, I've used it more times than you have. The question isn't whether I can get it to work right this afternoon. I can. The question is what is causing the problem to recur regularly across three releases. Notice: I use this computer 4-5 hours per week and I do little on it. PDFs, LaTeX, Firefox, Mendeley, and that's it. I don't constantly install new software, and I don't fiddle with things unnecessarily.

    Read the article

  • Webmaster Tools is throwing out 404 errors on a link not on the page

    - by plantify
    Webmaster Tools is showing thousands of 404 errors, where pages on the site are referring to another, incorrect URL. For example: URL not found www.plantify.co.uk/shop/=, linked from http://www.plantify.co.uk/shop/gift-voucher and http://www.plantify.co.uk/shop/special-plant-offers. I have obviously checked the source and cannot find any references to this link on any page. The only consistent pattern is that the error is only reported on pages with two path segments, i.e. www.plantify.co.uk/shop does not report any error, whilst pages of the form www.plantify.co.uk/shop/xxx (where xxx can be several different pages such as gift-voucher) all report it. I cannot seem to duplicate this error. I have run a link checker (we use Screaming Frog) and it does not report this error. I have fetched these pages as a bot, and they do not report this error either. I am at a total loss. I cannot even duplicate the issue, but it is most definitely an issue, as Webmaster Tools is reporting new errors every day. Is this perhaps Googlebot doing its own thing?

    Read the article

  • I Can't update 12.04 to 12.10 ERROR:"gpg: WARNING: This key is not certified with a trusted signature!"

    - by jason328
    I'm trying to update to 12.10 from a 12.04 machine. When I ran the command sudo do-release-upgrade -d in my terminal and verified my password, it checked for a new upgrade. I was then returned this:
        gpg: Good signature from "Ubuntu Archive Automatic Signing Key <[email protected]>"
        gpg: WARNING: This key is not certified with a trusted signature!
        gpg: There is no indication that the signature belongs to the owner.
    There is of course more output, but it may or may not contain private information. What's causing this and how do I fix it?

    Read the article

  • I need beginner help on loading an image (2D). I have an error

    - by Seth Taddiken
    I keep getting a "NullReferenceException was unhandled" error with "Object reference not set to an instance of an object." written under it. I have all of the images (png) with the correct names and added to references.
    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);
        backGround = Content.Load<Texture2D>("Cracked");
        player1.playerBlock = Content.Load<Texture2D>("square");
        player2.playerBlock = Content.Load<Texture2D>("square2");
    }
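    A hedged guess, since the player classes aren't shown here: this exception typically means player1/player2 were never constructed before LoadContent dereferences them. A hypothetical sketch (Player is a placeholder type name, not from the question):
        Player player1;
        Player player2;

        protected override void Initialize()
        {
            // Construct the player objects before LoadContent assigns their textures;
            // otherwise player1.playerBlock dereferences a null reference.
            player1 = new Player();
            player2 = new Player();
            base.Initialize();
        }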

    Read the article

  • Are multiple domain names and links from the same IP causing poor search engine rankings?

    - by John
    I have an ecommerce website which is not doing so well in Google. I am trying to improve this, of course, and am looking at some possible reasons why it isn't doing well. The website has four domain names, all of which have been indexed by Google. A few months ago I applied 301 redirects to any requests for two of the domain names, so now it is down to two domain names (one is a .net, the other is a .com.au; the others were .net.au and .com). I prefer to use my main domain name (the .com.au), but one of the names has been around for a long time and has more inbound links. According to a PageRank tool, both are PR2.

    It is a classic ASP site and up until recently had a lot of querystring parameters. In the last week or so I added URL rewriting, so there are now no parameters for most pages. I don't do 301 redirects from the old URLs; instead I add the META canonical tag indicating the preferred new URL. At the same time I redesigned the site and improved title tags, META descriptions, and H tags, but it hasn't been long enough yet for Google to index many of these. I also looked at what pages Google has indexed and, strangely, it has some odd pages in the index: a lot of pages which are actually keyword searches (more a bunch of random letters than actual words). It is as if someone had typed something into my search box; there are no links to pages like this and the only way of getting them is to type something into the search box. So I now add a META robots tag with noindex,nofollow any time I render pages like this.

    Years ago I set up a fake price comparison site which lists all my products and links back to my site. It has a different keyword-rich domain name but is on the same server and the same IP address. It has a completely different layout but does have the same product categories and product descriptions (although I have stripped the formatting out of them so they are not identical except in text). I also have a few blog sites which again are on the same server/IP and all carry advertising for the website.

    My questions are:
    1. What should I do with the multiple domains: just use one, or continue with two or more?
    2. Should I add 301 redirects, not just the META canonical tag?
    3. Any idea about Google indexing my search results pages, and did I do the right thing with the META robots tag?
    4. Is the fake price comparison site likely to be causing problems?
    5. Are all the links to the site from other domain names but the same IP address likely to be causing problems?
    Thanks for any help. Sorry for so many questions in one.

    Read the article

  • Weird Screen while booting to install, while installing and after the install...and then the "panic occured" error

    - by Klyn
    I've decided to install Ubuntu, but neither Ubuntu nor any other Linux distro will even get to the desktop screen, or work after getting there. On Windows 8 everything is just fine: my new video card works perfectly and I have no problem with anything about it. Then when I try to boot Ubuntu with Wubi or from USB, everything goes like this:
    1) Grub screen: no problem at all, colors are just fine, everything looks okay.
    2) Then the Linux boot screen: weird background color, and over the background there are vertical stripes of red-orange dots. But on the Ubuntu logo and text there are no dots at all! I mean, its shape is perfect.
    3) The desktop is about to start, but vertical stripes of red dots cover the whole Unity screen. Then when I click on Ubuntu's menu, it usually switches to a black screen saying something about "panic occured", and then it restarts or gives no response at all.
    The problems started after putting the HD 6570 video card on my ASUS M5A78LM-LX motherboard, which has an AMD Phenom II X4 processor on it. I've searched to find something but there was no similar question, which is why I'm almost sure it is kind of unique. Again, I'm writing this on Windows 8 right now and everything works and looks perfect. So far I've updated the BIOS. Does anyone know anything that could solve this?

    Read the article

  • How to include tags in WordPress permalinks

    - by babua
    While using a custom permalink structure for my WordPress URLs, I am trying to include tags, but it shows errors; when I add the category, it is reflected in the URL. I want the tag to be included in the custom URL structure automatically. How can this be done in WordPress? Please help. When I add /%tag%/ to the custom structure field in the WordPress admin, the URL shows a "not found" message.

    Read the article

  • Cannot save all of the property settings for this Web Part.

    - by ybbest
    I would like to display all the items of a custom content type called Animals in a SharePoint Content Query Web Part. After choosing the appropriate values in the web part editor and clicking Save, I got this error: Cannot save all of the property settings for this Web Part. There is a problem with one or more of the field values below. But when I examined all the values below, nothing was flagged with any error information. I finally managed to locate the error flag after expanding the Presentation section. I then deleted the text in the Link textbox, and now I can save the settings. However, I think the error message should have been more specific so that users can quickly locate the error. The worst part is that I did not even change anything in the Presentation section; I merely configured the Source in the Query section. Well, I guess I am still new to SharePoint, I just have to get used to these generic error messages ):

    Read the article

  • Canonical tags for separate mobile URLs

    - by DnBase
    I have a Drupal website serving mobile pages from different URLs (starting with /mobile). According to Google's recommendations I should use the canonical tag to map desktop and mobile pages. Right now I do this when I serve the same node (e.g. node/123 and mobile/node/123), but should I do this as well for other pages that are equivalent but have different content? For example, do I need to map the desktop and mobile homepages even if they don't have the same content at all?
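    For illustration, Google's guidance for separate mobile URLs pairs a rel="alternate" annotation on the desktop page with a rel="canonical" on the mobile page. A minimal sketch using the node/123 paths from the question and a hypothetical domain:
        <!-- On the desktop page (example.com/node/123) -->
        <link rel="alternate" media="only screen and (max-width: 640px)"
              href="http://example.com/mobile/node/123">
        <!-- On the mobile page (example.com/mobile/node/123) -->
        <link rel="canonical" href="http://example.com/node/123">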

    Read the article

  • How do private sales ecommerce sites manage their SEO?

    - by 142857
    On a private sales ecommerce site, users need to sign up or sign in before they can access the pages of the website. So even if a user tries to navigate directly to a product page, he is redirected to sign in. I am wondering, then, how these sites manage their SEO, as this would imply Google can't crawl these pages either. Or do they completely ignore the SEO benefit of allowing Google to crawl the product and catalogue pages?

    Read the article

  • OpenXML error “file is corrupt and cannot be opened.”

    - by nmgomes
    From time to time I hear people saying their new web application supports data export to Excel format. So far so good… but they don't tell the whole story… in fact, almost every time what is happening is that they are exporting data to a comma-separated file or simply exporting GridView-rendered HTML to an .xls file. OK… it works, but it's not something I would be proud of. So… yesterday I decided to take a look at the Office Open XML File Formats Specification (the Microsoft Office 2007+ format), which is based on well-known technologies: ZIP and XML. I started by installing the Open XML SDK 2.0 for Microsoft Office and playing with some samples. Then I tried it on a more complex web application and the “file is corrupt and cannot be opened.” message started happening. Google shows that many people suffer from the same thing, and it seems there are many reasons that can trigger this message. Some are related to the process itself, others to encodings or even styling. Well, none solved my problem and I had to dig… well, not that much: I simply changed the output file extension to zip and extracted the zip content. Then I did the same to the output file from my first sample, compared both zip contents with SourceGear DiffMerge, and found that my problem was culture related. Yes, my complex application sets Thread.CurrentThread.CurrentCulture to a non-English culture. For sample purposes I was simply using the ToString method to convert numbers and dates to a string representation, but forgot that XML is culture invariant, and thus using a decimal separator other than “.” will result in a deserialization problem. I solved the “file is corrupt and cannot be opened.” error by using the Convert.ToString(object, CultureInfo.InvariantCulture) method instead of the ToString method. Hope this can help someone.
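    A minimal sketch of the difference (the pt-PT culture and the price value are illustrative choices, not taken from the original post):
        using System;
        using System.Globalization;
        using System.Threading;

        class CultureDemo
        {
            static void Main()
            {
                Thread.CurrentThread.CurrentCulture = new CultureInfo("pt-PT");
                decimal price = 12.34m;

                // Culture-sensitive: prints "12,34" under pt-PT, which corrupts
                // the culture-invariant XML parts inside the .xlsx package.
                Console.WriteLine(price.ToString());

                // Culture-invariant: always "12.34", which the XML consumer expects.
                Console.WriteLine(Convert.ToString(price, CultureInfo.InvariantCulture));
            }
        }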

    Read the article

  • Duplicate page content and the Google index

    - by Kit Sunde
    I have static pages with dynamically expanding content that Google is indexing. I also have deep links into virtually duplicate pages which pre-expand the relevant section of content. It seems like Google is ignoring all my specialized pages and not putting them in the index, even after going through Webmaster Tools, crawling, and submitting them to the index manually. I also use the Google API to integrate search on the site, and the deep-linked pages won't show up. Is there a good solution for this?

    Read the article

  • How to run WordPress and Java web app running on Tomcat on the same server?

    - by Chantz
    I have to run a WordPress site served via Apache2 and a Java-based webapp running on Tomcat on the same server. When users come to example.com or example.com/public-pages they need to be served from WordPress, but when they come to example.com/private-pages they need to be served from Tomcat. I have asked this question on serverfault, where they suggested using a different port, a different IP, or a sub-domain. I want to go for the different-port solution since it means I only need to buy one SSL certificate. I tried the reverse proxy method by putting the following in my default-ssl.conf:
        <VirtualHost _default_:443>
            ServerAdmin webmaster@localhost
            ServerName localhost:443
            DocumentRoot /var/www
            <Directory /var/www>
                # For Wordpress
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyRequests Off
            ProxyPass /private-pages ajp://localhost:8009/
            ProxyPassReverse /private-pages ajp://localhost:8009/
            SSLEngine on
            SSLProxyEngine On
            SSLCertificateFile /etc/apache2/ssl/apache.crt
            SSLCertificateKeyFile /etc/apache2/ssl/apache.key
        </VirtualHost>
    As you may have noticed, I am using mod_proxy_ajp in Apache2 for this, and my Tomcat is listening on port 8009 and serving content. So now when I go to example.com/private-pages I see the content from Tomcat, but two issues are happening. 1. All my static resources are getting 404-ed, so none of my images, CSS, or JS get loaded. I see that the browser is requesting the resources using the URL example.com/css/*. This will clearly not work because it translates to example.com:80/css/* instead of example.com:8009/css/*, and there are no such resources in the WordPress directory. 2. If I go to example.com/private-pages/abcd I am somehow kicked to the WordPress site (which obviously displays a 404 page). I can understand why #1 is happening but have no clue why #2 is happening. Regardless, if there is another clean solution for resolving this, I would appreciate y'alls help.
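    One way to see the mismatch behind issue #1 (a hedged sketch, not a confirmed fix; it would also shadow any WordPress assets living under the same paths): the asset URLs the Tomcat app emits are site-relative, so they never match the /private-pages prefix and get looked up in the WordPress document root instead. Forwarding those paths explicitly would route them to Tomcat, for example:
        # Hypothetical additions inside the same VirtualHost
        ProxyPass        /css ajp://localhost:8009/css
        ProxyPassReverse /css ajp://localhost:8009/css
        ProxyPass        /js  ajp://localhost:8009/js
        ProxyPassReverse /js  ajp://localhost:8009/js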

    Read the article

  • Getting error - This application has failed to start because the application configuration is incorrect. Reinsta

    - by Kaye
    I am sorry if I am repeating this issue, but I am a layman and can't understand the answers posted. I want to do away with the Windows XP Professional (I have lost the product key) that I have on an old desktop PC that is more than 7-8 years old. I didn't use the PC for a couple of years as this laptop was more convenient. Now I have started to use the PC again, but cannot download SP2 onto it as it keeps asking for the product key and I don't remember where I wrote it down or where I kept it. So I downloaded Ubuntu 12.10 onto the desktop (about 753 MB and 3 hours) and then used UNetbootin to put it on my pendrive. After completing the process, I have on my pendrive 9 folders, 3 text files, 1 autorun file, 1 ldlinux file, 1 wubi application file and 5 other files, 20 items in total. Though on bootup I opted for USB boot, the boot-up opens into Windows. It simply bypasses the USB where Ubuntu lies. What do I do? Am I doing things correctly? Please explain in simple terms.

    Read the article

  • Error "Media change: please insert disc labeled..." when installing an application

    - by Stbn
    I've tried several times to install Radio Tray on my fresh Ubuntu 11.10 installation, but when I run sudo apt-get install radiotray the following dialog pops up at the end of the script: Media change: please insert the disc labeled 'Ubuntu 11.10 _Oneiric Ocelot_ - Release i386 (20111012)' in the drive '/cdrom/' and press enter. I've performed previous installations of this app on another computer using the same repository and everything went smoothly. My Ubuntu Oneiric version was installed from a USB flash drive, so I don't understand that request from apt-get.
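    A hedged guess rather than a confirmed diagnosis: installs made from removable media often leave a cdrom entry in /etc/apt/sources.list, and apt-get keeps prompting for that disc until the line is commented out. A hypothetical example of such a leftover line:
        # In /etc/apt/sources.list, comment out (or remove) the entry, then run: sudo apt-get update
        # deb cdrom:[Ubuntu 11.10 _Oneiric Ocelot_ - Release i386 (20111012)]/ oneiric main restricted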

    Read the article

  • SEO - Coaching Newbies

    All search engines use algorithms, and each search engine has different ones. An algorithm is the formula that the search engine uses to evaluate your web pages. The robots will crawl all pages on your site, but not all pages will be indexed.

    Read the article
