Search Results

Search found 80190 results on 3208 pages for 'web site project'.


  • Google Calendar API (Java) authentication error in dynamic web project

    - by HazProblem
    org.springframework.web.util.NestedServletException: Handler processing failed; nested exception is java.lang.NoClassDefFoundError: com/google/gdata/util/AuthenticationException
        org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:823)
        org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:719)
        org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:644)
        org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:560)
        javax.servlet.http.HttpServlet.service(HttpServlet.java:641)
        javax.servlet.http.HttpServlet.service(HttpServlet.java:722)
        org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:88)
        org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76)

    The class I have written works fine as a normal Java application, but when I try to use the code in a dynamic web project I get this authentication failure. Where's the difference?
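
    One hedged reading of the trace: the root failure is java.lang.NoClassDefFoundError, meaning the servlet container cannot find the gdata client classes at runtime - it is not an authentication failure from Google as such. A standalone Java application picks the JARs up from its own classpath, but a dynamic web project only sees libraries deployed inside the web application, conventionally under WEB-INF/lib (the JAR names below are hypothetical):

        MyWebProject/
            WEB-INF/
                web.xml
                lib/
                    gdata-core-1.0.jar        <- Google Data client JARs go here so
                    gdata-calendar-2.0.jar       the web app's classloader can load them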

    Read the article

  • Invoke web page from Linux C

    - by umetzu
    Hi, I need to get all of the HTML text from the URL "http://localhost/index.html" into a string variable in C. I know that if I use telnet - telnet www.google.com 80, then type a GET request - it returns all the HTML. How can I do this in a Linux environment, in C (NOT C++)? BTW, I'm a .NET programmer :/
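
    For reference, the exchange the C code has to reproduce over a TCP socket to port 80 is small; assuming a plain HTTP/1.0 request (convenient because the server closes the connection when the body ends, so you can read until EOF), the bytes written to the socket look like this - note the blank line (CRLF CRLF) that terminates the request:

        GET /index.html HTTP/1.0
        Host: localhost

        (the server then replies with a status line and headers, a blank line,
        and finally the raw HTML to capture into the string)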

    Read the article

  • How can I [simply] consume JSON Data in a Line of Business Web Application

    - by Atomiton
    I usually use JSON with jQuery to just return a string with HTML. However, I want to start to use JavaScript objects in my code. What's the simplest way to get started using JSON objects on my page? Here's a sample Ajax call (after $(document).ready( { ... }) of course):

        $('#btn').click(function(event) {
            event.preventDefault();
            var out = $('#result');
            $.ajax({
                url: "CustomerServices.asmx/GetCustomersByInvoiceCount",
                success: function(msg) {
                    // Iterate through the json results and spit them out to a page?
                },
                data: "{ 'invoiceCount' : 100 }"
            });
        });

    My WebMethod:

        [WebMethod(Description="Gets customers with more than n invoices")]
        public List<Customer> GetCustomersByInvoiceCount(int? invoiceCount)
        {
            using (dbDataContext db = new dbDataContext())
            {
                // ToList() materializes the results before the context is
                // disposed; Where() alone returns IQueryable<Customer>, which
                // would not compile as List<Customer> and would be evaluated
                // only after the using block exits.
                return db.Customers.Where(c => c.InvoiceCount >= invoiceCount).ToList();
            }
        }

    What gets returned:

        {"d":[{"__type":"Customer","Account":"1116317","Name":"SOME COMPANY","Address":"UNit 1 , 392 JOHN ST. ","LastTransaction":"\/Date(1268294400000)\/","HighestBalance":13922.34},{"__type":"Customer","Account":"1116318","Name":"ANOTHER COMPANY","Address":"UNIT #345 , 392 JOHN ST. ","LastTransaction":"\/Date(1265097600000)\/","HighestBalance":549.42}]}

    What I'd LIKE to know is: what are people generally doing with this returned JSON? Do you iterate through the properties and create an HTML table on the fly? Is there a way to "bind" JSON data using a JavaScript version of reflection (something like the .NET GridView control)? Do you throw this returned data into a JavaScript object and then do something with it?

    An example of what I want to achieve is to have a plain ol' HTML page (on a mobile device) with a list of a salesperson's customers. When one of those customers is clicked, the customer id gets sent to a web service which retrieves the customer details that are relevant to a salesperson. I know the SO talent pool is quite deep, so I figured you all here would be able to guide me in the right direction and give me a few ideas on the best way to approach this.
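
    One common answer is to simply loop over the d array (ASP.NET wraps JSON payloads in a "d" property, as visible above) and build markup on the fly. A minimal sketch, assuming the Ajax call also sets contentType and dataType so the ASMX ScriptService returns JSON rather than XML:

        $.ajax({
            type: "POST",
            url: "CustomerServices.asmx/GetCustomersByInvoiceCount",
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            data: '{"invoiceCount": 100}',
            success: function(msg) {
                var rows = [];
                // msg.d is the serialized List<Customer> from the WebMethod
                $.each(msg.d, function(i, cust) {
                    rows.push('<tr><td>' + cust.Account + '</td><td>' +
                              cust.Name + '</td><td>' +
                              cust.HighestBalance + '</td></tr>');
                });
                $('#result').html('<table>' + rows.join('') + '</table>');
            }
        });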

    Read the article

  • Cross-domain structure of the site

    - by Coreal
    I have a web site on which I wish to enable instant messaging via IRC. Now I am about to buy a VPS to host the ircd on. So, I will have two servers: one for the web site and the other for the ircd. The question is: how do I use the web site's domain name for the ircd?
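
    A hedged sketch of the usual approach: the domain itself stays where it is, and you add a DNS record for a subdomain that points at the VPS's IP address. In zone-file notation (names and addresses hypothetical):

        www.example.com.    IN  A   198.51.100.10   ; existing web server
        irc.example.com.    IN  A   203.0.113.5     ; new VPS running the ircd

    Users would then point their IRC clients at irc.example.com on whatever port the ircd listens on (6667 by convention).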

    Read the article

  • Best way to make this app/website combo?

    - by mharris7190
    I have an idea for an app that lets different bars upload drink specials via a webpage to that app. They would log in to the website, and the website would prompt them with a box to input the drink specials. When they are done, the app pulls the information and creates a card for that bar. The app would launch into a week view where you select the day you want to see specials for. After the day is selected, it brings you to a scroll view with different bars' cards laid out vertically, each taking up the width of the screen, allowing the user to scroll through the deals for that day of the week. How should I go about doing this if I have very little programming experience? Is there any convention for doing an app like this? Can anyone suggest any reading material that would help? Thank you!

    Read the article

  • Hash passwords before transmitting? (web)

    - by wag2639
    I was reading this Ars article on password security and it mentioned there are sites that "hash the password before transmitting". Now, assuming this isn't using an SSL connection (HTTPS):

    a. Is this actually secure?
    b. If it is, how would you do this in a secure manner?

    Edit 1 (some thoughts based on the first few answers):

    c. If you do hash the password before transmission, how do you use that if you only store a salted hash version of the password in your user credentials database?
    d. Just to check: if you are using an HTTPS-secured connection, is any of this necessary?
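
    As background for (a) and (b): a minimal sketch of what client-side hashing looks like in a current browser, using the SubtleCrypto API. The standard caveat effectively answers (a): without TLS, the transmitted hash itself becomes the credential an eavesdropper can capture and replay, so this is not a substitute for HTTPS.

        // Sketch only: hash a password with SHA-256 before sending it.
        async function hashPassword(password: string): Promise<string> {
            const data = new TextEncoder().encode(password);
            const digest = await crypto.subtle.digest('SHA-256', data);
            // convert the ArrayBuffer into a hex string
            return Array.from(new Uint8Array(digest))
                .map(b => b.toString(16).padStart(2, '0'))
                .join('');
        }

        // Usage (endpoint hypothetical): send the hash, not the raw password
        // const h = await hashPassword('correct horse battery staple');
        // fetch('/login', { method: 'POST', body: JSON.stringify({ user: u, hash: h }) });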

    Read the article

  • Image doesn't always render on web page

    - by zsharp
    One of my PNG files does not always get displayed in the browser (both in Firefox and IE). In Firebug the image is visible. Sometimes I'll even see the image start loading, and halfway through it fizzles. What could this be? The image is approx. 10 KB.

    Read the article

  • Accessing elements of this XML

    - by csU
    <wsdl:definitions targetNamespace="http://www.webserviceX.NET/">
      <wsdl:types>
        <s:schema elementFormDefault="qualified" targetNamespace="http://www.webserviceX.NET/">
          <s:element name="ConversionRate">
            <s:complexType>
              <s:sequence>
                <s:element minOccurs="1" maxOccurs="1" name="FromCurrency" type="tns:Currency"/>
                <s:element minOccurs="1" maxOccurs="1" name="ToCurrency" type="tns:Currency"/>
              </s:sequence>
            </s:complexType>
          </s:element>
          <s:simpleType name="Currency">
            <s:restriction base="s:string">
              <s:enumeration value="AFA"/>
              <s:enumeration value="ALL"/>
              <s:enumeration value="DZD"/>
              <s:enumeration value="ARS"/>

    I am trying to get at all of the elements in enumeration but can't seem to get it right. This is homework, so please no full solutions, just guidance if possible.

        $feed = simplexml_load_file('http://www.webservicex.net/CurrencyConvertor.asmx?WSDL');
        foreach($feed->simpleType as $val){
            $ns_s = $val->children('http://www.webserviceX.NET/');
            echo $ns_s->enumeration;
        }

    What am I doing wrong? Thanks

    Read the article

  • IE8 disappearing image bug

    - by Joe Fletcher
    In IE8 (& maybe others), when I leave my page to go to another tab in IE and then come back to my page's tab, each time the cursor runs over an image it disappears until I refresh the page. I've heard of disappearing image bugs, but I couldn't find anything on this particular case, especially given this isn't a weird pre-IE8 bug. I am using a lightbox, so possibly something to do with javascript? http://dev.bwmsnow.co.nz/snowboarding

    Read the article

  • Does .live() binding work for jQuery in IE7?

    - by Steve
    Hi everyone, I have a piece of JavaScript which is supposed to latch onto a form which gets introduced via XHR. It looks something like:

        $(document).ready(function() {
            $('#myform').live('submit', function() {
                $(foo).appendTo('#myform');
                $(this).ajaxSubmit(function() {
                    alert("HelloWorld");
                });
                return false;
            });
        });

    This happens to work in FF3, but not in IE7. Any idea what the problem is?
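
    One likely culprit, assuming an older jQuery version: submit events do not bubble in IE 6/7, and .live() works by listening for events that bubble up to the document, so .live('submit', ...) silently never fires there (cross-browser emulation for live submit/change events only arrived in the jQuery 1.4 line). A hedged workaround sketch is to bind directly to the form each time it is (re)introduced via XHR - the URLs below are hypothetical, and ajaxSubmit is the jQuery Form plugin call already used above:

        // Bind right after the XHR that injects the form completes.
        $.get('/form-fragment', function(html) {
            $('#container').html(html);
            $('#myform').submit(function() {   // direct bind works in IE7
                $(this).ajaxSubmit(function() {
                    alert("HelloWorld");
                });
                return false;                  // suppress the normal post
            });
        });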

    Read the article

  • Tip/Trick: Fix Common SEO Problems Using the URL Rewrite Extension

    - by ScottGu
    Search engine optimization (SEO) is important for any publicly facing web-site. A large percentage of traffic to sites now comes directly from search engines, and improving your site's search relevancy will lead to more users visiting your site from search engine queries. This can directly or indirectly increase the money you make through your site.

    This blog post covers how you can use the free Microsoft URL Rewrite Extension to fix a bunch of common SEO problems that your site might have. It takes less than 15 minutes (and no code changes) to apply 4 simple URL Rewrite rules to your site, and in doing so cause search engines to drive more visitors and traffic to your site. The techniques below work equally well with both ASP.NET Web Forms and ASP.NET MVC based sites. They also work with all versions of ASP.NET (and even work with non-ASP.NET content).

    [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    Measuring the SEO of your website with the Microsoft SEO Toolkit

    A few months ago I blogged about the free SEO Toolkit that we've shipped. This useful tool enables you to automatically crawl/scan your site for SEO correctness, and it then flags any SEO issues it finds. I highly recommend downloading and using the tool against any public site you work on. It makes it easy to spot SEO issues you might have in your site, and pinpoint ways to optimize it further. Prior to applying the URL Rewrite rules I'll cover later in this blog post, I ran a report against one of my sites (www.scottgu.com).

    Search Relevancy and URL Splitting

    Two of the important things that search engines evaluate when assessing your site's "search relevancy" are:

    1. How many other sites link to your content. Search engines assume that if a lot of people around the web are linking to your content, then it is likely useful and so weight it higher in relevancy.

    2. The uniqueness of the content it finds on your site. If search engines find that the content is duplicated in multiple places around the Internet (or on multiple URLs on your site) then it is likely to drop the relevancy of the content.

    One of the things you want to be very careful to avoid when building public facing sites is allowing different URLs to retrieve the same content within your site. Doing so will hurt you in both of the situations above. In particular, allowing external sites to link to the same content with multiple URLs will cause your link-count and page-ranking to be split up across those different URLs (and so give you a smaller page rank than what it would otherwise be if it was just one URL). Not allowing external sites to link to you in different ways sounds easy in theory - but you might wonder what exactly this means in practice and how you avoid it.

    4 Really Common SEO Problems Your Sites Might Have

    Below are 4 really common scenarios that can cause your site to inadvertently expose multiple URLs for the same content. When this happens, external sites linking to yours will end up splitting their page links across multiple URLs - and as a result cause you to have a lower page ranking with search engines than you deserve.

    SEO Problem #1: Default Document

    IIS (and other web servers) supports the concept of a "default document". This allows you to avoid having to explicitly specify the page you want to serve at either the root of the web-site/application, or within a sub-directory.
    This is convenient - but it means that by default this content is available via two different publicly exposed URLs (which is bad). For example:

        http://scottgu.com/
        http://scottgu.com/default.aspx

    SEO Problem #2: Different URL Casings

    Web developers often don't realize URLs are case sensitive to search engines on the web. This means that search engines will treat the following links as two completely different URLs:

        http://scottgu.com/Albums.aspx
        http://scottgu.com/albums.aspx

    SEO Problem #3: Trailing Slashes

    Consider the below two URLs - they might look the same at first, but they are subtly different. The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings:

        http://scottgu.com
        http://scottgu.com/

    SEO Problem #4: Canonical Host Names

    Sometimes sites are set up to respond to both a leading "www" hostname prefix as well as just the hostname itself. This causes search engines to treat the URLs as different and split search ranking:

        http://scottgu.com/albums.aspx
        http://www.scottgu.com/albums.aspx

    How to Easily Fix these SEO Problems in 10 minutes (or less) using IIS Rewrite

    If you haven't been careful when coding your sites, chances are you are suffering from one (or more) of the above SEO problems. Addressing these issues will improve your search engine relevancy ranking and drive more traffic to your site.

    The "good news" is that fixing the above 4 issues is really easy using the URL Rewrite Extension. This is a completely free Microsoft extension available for IIS 7.x (on Windows Server 2008, Windows Server 2008 R2, Windows 7 and Windows Vista). The great thing about using the IIS Rewrite extension is that it allows you to fix the above problems *without* having to change any code within your applications.

    You can easily install the URL Rewrite Extension in under 3 minutes using the Microsoft Web Platform Installer (a free tool we ship that automates setting up web servers and development machines). Just click the green "Install Now" button on the URL Rewrite Spotlight page to install it on your Windows Server 2008, Windows 7 or Windows Vista machine.

    Once installed you'll find that a new "URL Rewrite" icon is available within the IIS 7 Admin Tool. Double-clicking the icon will open up the URL Rewrite admin panel - which displays the list of URL Rewrite rules configured for a particular application or site. The rule list is empty when you first install the extension. We can click the "Add Rule..." link button in the top-right of the panel to add and enable new URL Rewriting logic for our site.

    Scenario 1: Handling Default Document Scenarios

    One of the SEO problems I discussed earlier in this post was the scenario where the "default document" feature of IIS causes you to inadvertently expose two URLs for the same content on your site. For example:

        http://scottgu.com/
        http://scottgu.com/default.aspx

    We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the second URL to instead go to the first one. We will setup the HTTP redirect to be a "permanent redirect" - which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve. Let's look at how we can create such a rule. We'll begin by clicking the "Add Rule" link.
    This brings up the "Add Rule" dialog. We'll select the "Blank Rule" template within the "Inbound rules" section to create a new custom URL Rewriting rule, which opens an empty rule-editor pane. Don't worry - setting up the rule is easy. The following 4 steps explain how to do so:

    Step 1: Name the Rule

    Our first step will be to name the rule we are creating. Naming it with a descriptive name will make it easier to find and understand later. Let's name this rule our "Default Document URL Rewrite" rule.

    Step 2: Setup the Regular Expression that Matches this Rule

    Our second step will be to specify a regular expression filter that will cause this rule to execute when an incoming URL matches the regex pattern. Don't worry if you aren't good with regular expressions - I suck at them too. The trick is to know someone who is good at them or copy/paste them from a web-site. We are going to specify the following regular expression as our pattern rule:

        (.*?)/?Default\.aspx$

    This pattern will match any URL string that ends with Default.aspx. The "(.*?)" matches any preceding character zero or more times. The "/?" part says to match the slash symbol zero or one times. The "$" symbol at the end will ensure that the pattern will only match strings that end with Default.aspx. Combining all these regex elements allows this rule to work not only for the root of your web site (e.g. http://scottgu.com/default.aspx) but also for any application or subdirectory within the site (e.g. http://scottgu.com/photos/default.aspx). Because the "ignore case" checkbox is selected, it will match both "Default.aspx" as well as "default.aspx" within the URL.

    One nice feature built into the rule editor is a "Test pattern" button that you can click to bring up a dialog that allows you to test out a few URLs with the rule you are configuring. Entering a "products/default.aspx" URL and clicking the "Test" button gives immediate feedback on whether the rule will execute for it.

    Step 3: Setup a Permanent Redirect Action

    We'll then setup an action to occur when our regular expression pattern matches the incoming URL. In the rule editor I've changed the "Action Type" drop down to be a "Redirect" action. The "Redirect Type" will be a HTTP 301 Permanent redirect - which means search engines will follow it. I've also set the "Redirect URL" property to be:

        {R:1}/

    This indicates that we want to redirect the web client requesting the original URL to a new URL that has the originally requested URL path - minus the "Default.aspx" in it. For example, requests for http://scottgu.com/default.aspx will be redirected to http://scottgu.com/, and requests for http://scottgu.com/photos/default.aspx will be redirected to http://scottgu.com/photos/.

    The "{R:N}" regex construct, where N >= 0, is called a back-reference and N is the back-reference index. In the case of our pattern "(.*?)/?Default\.aspx$", if the input URL is "products/Default.aspx" then {R:0} will contain "products/Default.aspx" and {R:1} will contain "products". We are going to use this {R:1}/ value to be the URL we redirect users to.
    Step 4: Apply and Save the Rule

    Our final step is to click the "Apply" button in the top right hand of the IIS admin tool - which will cause the tool to persist the URL Rewrite rule into our application's root web.config file (under a <system.webServer/rewrite> configuration section):

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="Default Document" stopProcessing="true">
                  <match url="(.*?)/?Default\.aspx$" />
                  <action type="Redirect" url="{R:1}/" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    Because IIS 7.x and ASP.NET share the same web.config files, you can actually just copy/paste the above code into your web.config files using Visual Studio and skip the need to run the admin tool entirely. This also makes adding/deploying URL Rewrite rules with your ASP.NET applications really easy.

    Step 5: Try the Rule Out

    Now that we've saved the rule, let's try it out on our site. Try the following two URLs on my site:

        http://scottgu.com/
        http://scottgu.com/default.aspx

    Notice that the second URL automatically redirects to the first one. Because it is a permanent redirect, search engines will follow the URL and should update the page ranking of http://scottgu.com to include links to http://scottgu.com/default.aspx as well.

    Scenario 2: Different URL Casing

    Another common SEO problem I discussed earlier in this post is that URLs are case sensitive to search engines on the web. This means that search engines will treat the following links as two completely different URLs:

        http://scottgu.com/Albums.aspx
        http://scottgu.com/albums.aspx

    We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL to instead go to the second (all lower-case) one. Like before, we will setup the HTTP redirect to be a "permanent redirect" - which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.

    To create such a rule we'll click the "Add Rule" link in the URL Rewrite admin tool again, which causes the "Add Rule" dialog to appear. Unlike the previous scenario (where we created a "Blank Rule"), with this scenario we can take advantage of a built-in "Enforce lowercase URLs" rule template. When we click the "ok" button we'll see a dialog which asks us if we want to create a rule that enforces the use of lowercase letters in URLs. When we click the "Yes" button we'll get a pre-written rule that automatically performs a permanent redirect if an incoming URL has upper-case characters in it - and automatically sends users to a lower-case version of the URL.

    We can click the "Apply" button to use this rule "as-is" and have it apply to all incoming URLs to our site. Because my www.scottgu.com site uses ASP.NET Web Forms, I'm going to make one small change to the rule we generated above - which is to add a condition that will ensure that URLs to ASP.NET's built-in "WebResource.axd" handler are excluded from our case-sensitivity URL Rewrite logic. URLs to the WebResource.axd handler will only come from server-controls emitted from my pages - and will never be linked to from external sites. While my site will continue to function fine if we redirect these URLs to automatically be lower-case - doing so isn't necessary and will add an extra HTTP redirect to many of my pages.
    The good news is that adding a condition that prevents my URL Rewriting rule from firing for certain URLs is easy. We simply need to expand the "Conditions" section of the rule editor, then click the "Add" button to add a condition clause. This brings up the "Add Condition" dialog. There I've entered {URL} as the condition input - and said that this rule should only execute if the URL does not match a regex pattern which contains the string "WebResource.axd". This will ensure that WebResource.axd URLs to my site will be allowed to execute just fine without having the URL be re-written to be all lower-case.

    Note: If you have static resources (like references to .jpg, .css, and .js files) within your site that currently use upper-case characters, you'll probably want to add additional condition filter clauses so that URLs to them also don't get redirected to be lower-case (just add rules for patterns like .jpg, .gif, .js, etc). Your site will continue to work fine if these URLs get redirected to be lower case (meaning the site won't break) - but it will cause an extra HTTP redirect to happen on your site for URLs that don't need to be redirected for SEO reasons. So setting up a condition clause makes sense to add.

    When I click the "ok" button and apply our lower-case rewriting rule, the admin tool will save the following additional rule to our web.config file:

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="Default Document" stopProcessing="true">
                  <match url="(.*?)/?Default\.aspx$" />
                  <action type="Redirect" url="{R:1}/" />
                </rule>
                <rule name="Lower Case URLs" stopProcessing="true">
                  <match url="[A-Z]" ignoreCase="false" />
                  <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                    <add input="{URL}" pattern="WebResource.axd" negate="true" />
                  </conditions>
                  <action type="Redirect" url="{ToLower:{URL}}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    Try the Rule Out

    Now that we've saved the rule, let's try it out on our site. Try the following two URLs on my site:

        http://scottgu.com/Albums.aspx
        http://scottgu.com/albums.aspx

    Notice that the first URL (which has a capital "A") automatically does a redirect to a lower-case version of the URL.

    Scenario 3: Trailing Slashes

    Another common SEO problem I discussed earlier in this post is the scenario of trailing slashes within URLs. The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings:

        http://scottgu.com
        http://scottgu.com/

    We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that does not have a trailing slash) to instead go to the second one that does. Like before, we will setup the HTTP redirect to be a "permanent redirect" - which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.

    To create such a rule we'll click the "Add Rule" link in the URL Rewrite admin tool again. This will cause the "Add Rule" dialog to appear once more. The URL Rewrite admin tool has a built-in "Append or remove the trailing slash symbol" rule template.
    When we select it and click the "ok" button we'll see a dialog which asks us if we want to create a rule that automatically redirects users to a URL with a trailing slash if one isn't present. Like within our previous lower-casing rewrite rule, we'll add one additional condition clause that will exclude WebResource.axd URLs from being processed by this rule. This will avoid an unnecessary redirect from happening for those URLs.

    When we click the "OK" button we'll get a pre-written rule that automatically performs a permanent redirect if the URL doesn't have a trailing slash - and if the URL is not processed by either a directory or a file. This will save the following additional rule to our web.config file:

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="Default Document" stopProcessing="true">
                  <match url="(.*?)/?Default\.aspx$" />
                  <action type="Redirect" url="{R:1}/" />
                </rule>
                <rule name="Lower Case URLs" stopProcessing="true">
                  <match url="[A-Z]" ignoreCase="false" />
                  <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                    <add input="{URL}" pattern="WebResource.axd" negate="true" />
                  </conditions>
                  <action type="Redirect" url="{ToLower:{URL}}" />
                </rule>
                <rule name="Trailing Slash" stopProcessing="true">
                  <match url="(.*[^/])$" />
                  <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                    <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                    <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                    <add input="{URL}" pattern="WebResource.axd" negate="true" />
                  </conditions>
                  <action type="Redirect" url="{R:1}/" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    Try the Rule Out

    Now that we've saved the rule, let's try it out on our site. Try the following two URLs on my site:

        http://scottgu.com
        http://scottgu.com/

    Notice that the first URL (which has no trailing slash) automatically does a redirect to a URL with the trailing slash. Because it is a permanent redirect, search engines will follow the URL and update the page ranking.

    Scenario 4: Canonical Host Names

    The final SEO problem I discussed earlier covers scenarios where a site works with both a leading "www" hostname prefix as well as just the hostname itself. This causes search engines to treat the URLs as different and split search ranking:

        http://www.scottgu.com/albums.aspx
        http://scottgu.com/albums.aspx

    We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that has a www prefix) to instead go to the second URL. Like before, we will setup the HTTP redirect to be a "permanent redirect" - which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.

    To create such a rule we'll click the "Add Rule" link in the URL Rewrite admin tool again. This will cause the "Add Rule" dialog to appear once more. The URL Rewrite admin tool has a built-in "Canonical domain name" rule template.
    When we select it and click the "ok" button we'll see a dialog which asks us if we want to create a redirect rule that automatically redirects users to a primary host name URL. Here I'm entering the primary URL address I want to expose to the web: scottgu.com. When we click the "OK" button we'll get a pre-written rule that automatically performs a permanent redirect if the URL has another leading domain name prefix. This will save the following additional rule to our web.config file:

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="Cannonical Hostname">
                  <match url="(.*)" />
                  <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                    <add input="{HTTP_HOST}" pattern="^scottgu\.com$" negate="true" />
                  </conditions>
                  <action type="Redirect" url="http://scottgu.com/{R:1}" />
                </rule>
                <rule name="Default Document" stopProcessing="true">
                  <match url="(.*?)/?Default\.aspx$" />
                  <action type="Redirect" url="{R:1}/" />
                </rule>
                <rule name="Lower Case URLs" stopProcessing="true">
                  <match url="[A-Z]" ignoreCase="false" />
                  <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                    <add input="{URL}" pattern="WebResource.axd" negate="true" />
                  </conditions>
                  <action type="Redirect" url="{ToLower:{URL}}" />
                </rule>
                <rule name="Trailing Slash" stopProcessing="true">
                  <match url="(.*[^/])$" />
                  <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                    <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                    <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                    <add input="{URL}" pattern="WebResource.axd" negate="true" />
                  </conditions>
                  <action type="Redirect" url="{R:1}/" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    Try the Rule Out

    Now that we've saved the rule, let's try it out on our site. Try the following two URLs on my site:

        http://www.scottgu.com/albums.aspx
        http://scottgu.com/albums.aspx

    Notice that the first URL (which has the "www" prefix) now automatically does a redirect to the second URL, which does not have the www prefix. Because it is a permanent redirect, search engines will follow the URL and update the page ranking.

    4 Simple Rules for Improved SEO

    The above 4 rules are pretty easy to setup and should take less than 15 minutes to configure on existing sites you already have. The beauty of using a solution like the URL Rewrite Extension is that you can take advantage of it without having to change code within your web-site - and without having to break any existing links already pointing at your site. Users who follow existing links will be automatically redirected to the new URLs you wish to publish. And search engines will start to give your site a higher search relevancy ranking - which will list your site higher in search results and drive more traffic to it.
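
    (A quick way to verify each of the rules above, assuming curl is available: request a redirected URL with -I, which fetches headers only, and check for the 301 status and its Location header.)

        curl -I http://scottgu.com/default.aspx
        #   HTTP/1.1 301 Moved Permanently     <- expected
        #   Location: http://scottgu.com/      <- expected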
    Customizing your URL Rewriting rules further is easy to do, either by editing the web.config file directly or, alternatively, by double clicking the URL Rewrite icon within the IIS 7.x admin tool, which will list all the active rules for your web-site or application. Clicking any of the rules will open the rule editor back up and allow you to tweak/customize/save them further.

    Summary

    Measuring and improving SEO is something every developer building a public-facing web-site needs to think about and focus on. If you haven't already, download and use the SEO Toolkit to analyze the SEO of your sites today.

    New URL Routing features in ASP.NET MVC and ASP.NET Web Forms 4 make it much easier to build applications that have more control over the URLs that are published. Tools like the URL Rewrite Extension that I've talked about in this blog post make it much easier to improve the URLs that are published from sites you already have built today - without requiring you to change a lot of code.

    The URL Rewrite Extension provides a bunch of additional great capabilities - far beyond just SEO - as well. I'll be covering these additional capabilities more in future blog posts.

    Hope this helps,

    Scott

    Read the article

  • VSDB to SSDT Part 2: SQL Server 2008 Server Project … with SSDT

    - by Etienne Giust
    With Visual Studio 2012 and the use of SSDT technology, there is only one type of database project: the SQL Server Database Project. With Visual Studio 2010, we used to have the SQL Server 2008 Server Project, which we used to define server-level objects, mostly logins and linked servers. A convenient wizard allowed for the creation of this type of project. It does not exist anymore. Here is how to create an equivalent of the SQL Server 2008 Server Project with Visual Studio 2012:

    1. Create a new SQL Server Database Project: it will be created empty.
    2. Create a new SQL Schema Compare (SQL menu item > Schema Compare > New Schema Comparison).
    3. As a source, select any database on the SQL server you want to mimic.
    4. Set the target to be your newly created Database Project.
    5. In the Schema Compare options (cog-like icon), Object Types pane, enable the server-level object types you need. You might want to tweak those and select only the object types you want.

    Then, run the comparison, review and select your changes, and apply them to the project.

    Read the article

  • Major increase in Google "not able to follow" errors since introducing 301s on site

    - by jakob
    Recently we implemented Varnish in front of our web nodes so that the backend would get some rest from time to time. Since Varnish is case sensitive and our app was not, we implemented a 301 in Varnish to redirect to lower case. Example: if you search for "PlumBer StockHOLM" you get a 301 redirect to "plumber stockholm", and then "plumber stockholm" is cached. This worked like a charm, but when checking Google Webmaster Tools we suddenly got a crazy amount of "Status - Not able to follow" errors. This of course stirred up some panic and I started to read up on the documentation once again. Pressing one of the links took me to the help section, and what I found there was strange; meanwhile, as the day progressed, more and more errors were thrown by Google.

    We took the decision to make Varnish return 200 instead of the 301. Now when testing the links that appear in the "Not able to follow" section I get a 200 back. I have tested with Chrome, curl and the lynx reader and everything looks OK, but the number of errors is still increasing. What is a little bit comforting is that the links that appear in the "Not able to follow" section are dated before the 200 change in Varnish.

    Why do I get these errors and why do they keep increasing? Did Google release something new on October 31? Maybe I do not understand the docs correctly?
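
    For context, the lower-casing redirect described above is typically implemented in Varnish with the std vmod along the following lines; a rough sketch for Varnish 3, with 750 as an arbitrary internal marker status:

        import std;

        sub vcl_recv {
            # bounce any URL containing upper-case characters to its lower-case form
            if (req.url ~ "[A-Z]") {
                error 750 std.tolower(req.url);
            }
        }

        sub vcl_error {
            if (obj.status == 750) {
                set obj.http.Location = obj.response;  # the lower-cased URL from above
                set obj.status = 301;                  # permanent redirect
                return (deliver);
            }
        }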

    Read the article

  • Managing JS and CSS for a static HTML web application

    - by Josh Kelley
    I'm working on a smallish web application that uses a little bit of static HTML and relies on JavaScript to load the application data as JSON and dynamically create the web page elements from that.

    First question: Is this a fundamentally bad idea? I'm unclear on how many web sites and web applications completely dispense with server-side generation of HTML. (There are obvious disadvantages of JS-only web apps in the areas of graceful degradation / progressive enhancement and being search engine friendly, but I don't believe that these are an issue for this particular app.)

    Second question: What's the best way to manage the static HTML, JS, and CSS? For my "development build," I'd like non-minified third-party code, multiple JS and CSS files for easier organization, etc. For the "release build," everything should be minified, concatenated together, etc. If I was doing server-side generation of HTML, it'd be easy to have my web framework generate different development versus release HTML that includes multiple verbose versus concatenated minified code. But given that I'm only doing static HTML, what's the best way to manage this? (I realize I could hack something together with ERB or Perl, but I'm wondering if there are any standard solutions.)

    In particular, since I'm not doing any server-side HTML generation, is there an easy, semi-standard way of setting up my static HTML so that it contains code like

        <script src="js/vendors/jquery.js"></script>
        <script src="js/class_a.js"></script>
        <script src="js/class_b.js"></script>
        <script src="js/main.js"></script>

    at development time and

        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>
        <script src="js/entire_app.min.js"></script>

    for release?
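
    There are established tools for this (asset packagers like Sprockets or Jammit, or an Ant/YUI Compressor build step), but the core idea is small enough to sketch. A hypothetical Node build script that concatenates the development files and rewrites the HTML - the file names and the marker comments are assumptions:

        // build.js - naive "release build": concatenate JS and retarget the HTML.
        // Run with: node build.js
        const fs = require('fs');

        const sources = [
            'js/vendors/jquery.js',
            'js/class_a.js',
            'js/class_b.js',
            'js/main.js',
        ];

        // Concatenate (a real build would also minify the bundle).
        const bundle = sources.map(f => fs.readFileSync(f, 'utf8')).join('\n;\n');
        fs.writeFileSync('js/entire_app.min.js', bundle);

        // Swap the block of dev <script> tags for the single release tag,
        // assuming index.html wraps them in <!-- scripts --> ... <!-- /scripts -->.
        const html = fs.readFileSync('index.html', 'utf8').replace(
            /<!-- scripts -->[\s\S]*?<!-- \/scripts -->/,
            '<script src="js/entire_app.min.js"></script>'
        );
        fs.writeFileSync('index.release.html', html);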

    Read the article

  • Sharing authentication methods across API and web app

    - by Snixtor
    I'm wanting to share an authentication implementation across a web application and a web API. The web application will be ASP.NET (mostly MVC 4); the API will be mostly ASP.NET Web API, though I anticipate it will also have a few custom modules or handlers. I want to:

    - Share as much authentication implementation between the app and API as possible.
    - Have the web application behave like forms authentication (attractive log-in page, logout option, redirect to / from login page when a request requires authentication / authorisation).
    - Have API callers use something closer to standard HTTP (401 - Unauthorized, not 302 - Redirect).
    - Provide client and server side logout mechanisms that don't require a change of password (so HTTP basic is out, since clients typically cache their credentials).

    The way I'm thinking of implementing this is using plain old ASP.NET forms authentication for the web application, and pushing another module into the stack (much like MADAM - Mixed Authentication Disposition ASP.NET Module). This module will look for some HTTP header (implementation specific) which indicates "caller is API". If the "caller is API" header is set, then the service will respond differently than standard ASP.NET forms authentication; it will:

    - 401 instead of 302 on a request lacking authentication.
    - Look for username + pass in a custom "Login" HTTP header, and return a FormsAuthentication ticket in a custom "FormsAuth" header.
    - Look for the FormsAuthentication ticket in a custom "FormsAuth" header.

    My question(s) are:

    - Is there a framework for ASP.NET that already covers this scenario?
    - Are there any glaring holes in this proposed implementation? My primary fear is a security risk that I can't see, but I'm similarly concerned that there may be something about such an implementation that will make it overly restrictive or clumsy to work with.

    Read the article

  • Identity Propagation for Web Service - 11g

    - by Prakash Yamuna
    I came across this post from Biemond on how to do identity propagation using OWSM. As I have mentioned in the past here, here and here - Biemond has a number of excellent posts on OWSM. However I found one part of his comment puzzling. I quote:

    "OWSM allows you to pass on the identity of the authenticated user to your OWSM protected web service ( thanks to OPSS ), this username can then be used by your service. This will work on one or between different WebLogic domains. Off course when you don't want to use OWSM you can always use Oracle Access Manager OAM which can do the same."

    The last sentence (shown in red in the original) is the part I find puzzling. In fact I just discussed this particular topic recently here. So let me try and clarify a few points:

    a) OAM is used for Web SSO.
    b) OWSM is used for securing Web Services. You cannot do identity propagation using OAM for Web Services.
    c) You use SAML to do identity propagation across Web Services. OAM also supports SAML - but that is the browser profile of SAML relevant in the context of Web SSO, and it is not related to the SAML Token Profile defined as part of the WS-Security spec.

    Read the article

  • DAO/Webservice Consumption in Web Application

    - by Gavin
    I am currently working on converting a "legacy" web-based (ColdFusion) application from a single data source (MSSQL database) to multi-tier OOP. In my current system there is a read/write database with all the usual stuff, and additional "read-only" databases that are exported daily/hourly from an Enterprise Resource Planning (ERP) system by SSIS jobs, with business product/item and manufacturing/SCM planning data.

    The reason I have the opportunity and need to convert to multi-tier OOP is that a newer, more modern ERP system is being implemented business-wide that will be a complete replacement. This newer ERP system offers several interfaces for third-party applications like mine, from direct SQL access to either a .NET web service or a SOAP-like web service. I have found several suitable frameworks I would be happy to use (ColdSpring, FW/1), but I am not sure what design patterns apply to my data access object/component and how to manage the connection/session tokens. With this background, my question has the following three parts:

    1. I have concerns with moving from the relative safety of an SSIS job - which protects me from the ERP system's downtime and speed - to directly connecting with one of the web services, which I note seem significantly slower than I expected (simple/small requests often take up to a whole second). Are there any design patterns I can investigate/use to cache/protect my data tier?

    2. It is my understanding that data access objects (the components that connect directly with the web services and convert their responses into the data types I can then work with in my domain objects) should be singletons (and will act as an Adapter/Facade) - am I correct?

    3. As part of the data access object I have to set up a connection by username/password (I could set up multiple users and/or connect multiple times with this) which responds with a session token that needs to be provided on every subsequent request. Do I do this once and share it across the whole application? Do I set up a new "connection" for every user of my application and keep the token in their session scope (might quickly hit licensing limits)? Do I set the "connection" up per page request? Or is there a design pattern I am missing that can manage multiple "connections", where a request uses the first free "connection"? It is worth noting that if the ERP system dies I will need to reset/invalidate all the connections and start from scratch, and depending on which web service I use I might need to manually close the "connection/session".
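
    On the third part: what is being described is essentially a session-token pool, the same shape as a database connection pool. A language-neutral sketch of the idea (TypeScript here purely for illustration; every name is hypothetical):

        // Minimal session-token pool: hand out the first free token, create a
        // new one while under the cap, and let callers return tokens when done.
        class TokenPool {
            private free: string[] = [];
            private inUse = new Set<string>();

            constructor(private login: () => Promise<string>,  // username/password exchange
                        private max: number) {}                // licensing cap

            async acquire(): Promise<string> {
                let token = this.free.pop();
                if (!token) {
                    if (this.inUse.size >= this.max) {
                        throw new Error('licensing cap reached - retry later');
                    }
                    token = await this.login();
                }
                this.inUse.add(token);
                return token;
            }

            release(token: string): void {
                this.inUse.delete(token);
                this.free.push(token);
            }

            invalidateAll(): void {   // e.g. when the ERP system restarts
                this.free = [];
                this.inUse.clear();
            }
        }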

    Read the article

  • Will rewriting your .htaccess to 404 to return search results from your site negatively affect your ranking in Google?

    - by leeand00
    Depending on the type of site that you are running, it may or may not be advantageous to display search results instead of a 404 page when someone visits a non-existent page on your site. I believe that the site I've been maintaining recently would benefit from this, as it is the site of a publication, and with a publication the more people you can get to read your site the better. But after reading up on how Google ranks the "quality" of your site - where you will appear in SERPs based on how closely the meta text of a page relates to the content of the page - I have to wonder if making a 404 page link to the search results would harm the "quality" of your site in Google's eyes.
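
    For what it's worth, the mechanical side of this is usually Apache's ErrorDocument directive rather than a rewrite rule; a minimal sketch (the search page path is hypothetical):

        # .htaccess: hand requests for unknown URLs to the site's search page
        ErrorDocument 404 /search.php

    The SEO-relevant detail is the status code: if the search page answers such requests with 200, Google tends to classify them as "soft 404s", which its guidelines explicitly discourage. Showing search results while still returning a 404 status is generally considered the safer pattern for ranking.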

    Read the article

  • Just Another Web Service (JAWS) vs SOA

    Over the last few years SOA has been a hot topic, which has left it open to abuse by many who have no understanding of the concept. In my opinion, one of the largest issues facing SOA is the lack of understanding and experience implementing SOA by business and IT alike. I just recently deployed a new web service that is called by multiple service clients. Would you call this SOA because it is a web service that can be called by any requesting client? In my opinion, this is not SOA; instead it is Just Another Web Service (JAWS). Just because a company creates a web service does not mean that it is using SOA; in fact it only means that it is using a web service.

    SOA is an architectural style that focuses on the design of systems based on consumers and providers through the use of contracts. With this approach, SOA needs to be applied from the top down in order for it to reach its full potential. In the case of the web service, the service is just a small part of the entire system that is reusable and has the flexibility to change. In order for a company in this case to move towards SOA, it needs to define business processes that can be shared through the use of reusable software and loose coupling. Once the company's thought and development processes change to address changes in this manner, it can start to move closer to SOA.

    Read the article

  • Configuring WS-Security with PeopleSoft Web Services

    - by Dave Bain
    I was speaking with a customer a few days ago about PeopleSoft web services. The customer created a web service, but when they went to deploy it they had so many problems configuring WS-Security that they pulled the service. They spent several days trying to get it working but never did, so they've put it on hold until they have time to work through the issues.

    Having gone through the process of configuring WS-Security myself, I understand the complexity. There is no magic 'easy' button to push. If you are not familiar with all the moving parts - policies, certificates, public and private keys, credential stores, and so on - it can be a daunting task. PeopleBooks documentation is good but does not offer a step-by-step example to follow. Fear not: for those that want more help, there is a place to go.

    PeopleSoft released a Mobile Inventory Management application over a year ago. It is a mobile app built with Oracle Fusion Application Development Framework (ADF) that accesses PeopleSoft content through standard web services. Part of the installation of this app is configuring WS-Security for the web services used in the application. Appendix A of the PeopleSoft FSCM91 Mobile Inventory Management Installation Guide is called Configuring WS-Security for Mobile Inventory Management. It is a step-by-step guide to configuring WS-Security between a server running Oracle Web Services Manager (OWSM) and PeopleSoft Integration Broker. Your environment might be different, but the steps will be similar, and on the PeopleSoft side Integration Broker will remain a constant.

    You can find the installation guide on Oracle Support. Sign in to https://support.us.oracle.com and search for document 1290972.1. Read through Appendix A for more details about how to set up WS-Security with PeopleSoft web services.

    Read the article

  • Possible for one developer to work on a site that's on another developer's server?

    - by cire4
    Sorry for the confusing title. Let me explain: I am currently trying to get a site developed. My current developer has taken the site about as far as I think they are capable of and I am planning on hiring another developer to put the finishing touches on it, debug it and upgrade some of the more technical details. The site is hosted on my current developer's server. They are scheduled to work on it until mid-April, at which point they will transfer the site to my server. I would like the new developer to get started on the upgrades to the site as soon as possible. So my question is this: Is it possible for the new developer to start working on upgrades to the site while it is still on the old developer's server (and without the old developer knowing about it)? Would the new developer have to create a mirror site and work on it that way? I'm having trouble imagining if this is possible so any advice you can offer would be much appreciated!

    Read the article

  • Breaking The Promise of Web Service Interoperability

    The promise of web service interoperability is achievable if certain technical and non-technical issues are dealt with properly. As the world gets smaller and smaller thanks to our growing global economy, the need for security is increasing. The use of security is vital in the transferring of data from one server to another. As new security standards and protocols are created, the environments for web service hosts and clients must be in sync so that they can communicate on the same standards and protocols. For example, if a new protocol X can only be implemented on computers built after 2010, then all computers built prior to 2010 will not be able to connect to any web service hosts that only use this protocol in their security policy. If the host and client of a web service cannot communicate using a set of common standards and protocols, then web services are not available to these clients, thus breaking the promise of interoperability.

    Another limiting factor of web services is governmental policies and regulations. I experienced this first hand last year when I had to work on a project that dealt with personally identifiable information (PII) regarding US and Canadian citizens. Currently the Canadian government regulates that any data pertaining to Canadian citizens must be stored in Canada only. The issue that we had was the fact that we are a US-based company that sometimes works with Canadian PII as part of a service that we provide. Because we are a US-based company dealing with Canadian data, we had to place a file server inside the borders of Canada in order for us to continue working for our Canadian customers.

    Read the article

  • Google indexed site's address by accident. What do I do now?

    - by AndrejaKo
    I was making a site for a friend of mine and he wanted to be able to see my progress as I worked on the site, so I decided to put the site on a server on my computer and enable access by a domain name registered to me. It turns out that I forgot to set up a robots.txt file for the site and somehow Google indexed the site. My question is: What do I do now? As I understand it, Google doesn't like duplicate content and my friend could have problems when I upload the new site to his server. Right now his current site, which only has a work in progress page, is first on Google when searching for relevant keywords and I really really don't want to damage that. Is there anything else I need to be concerned about?

    Read the article
