Search Results


  • Create ASP.NET 3.5 Sitemap XML for Navigational Web Controls

    It is important to create a user-friendly website. One aspect that defines a user-friendly website is clearly defined navigation based on a web sitemap. ASP.NET 3.5 provides so-called navigational web controls that are used to create and present navigation to website users. These navigational web controls depend on the website's XML sitemap. This tutorial will illustrate how a developer can create this XML sitemap, which can then power the web controls that present website navigation....

    Read the article

  • Crawling for geotagged data

    - by abe3
    I have no experience with web crawlers -- but I know that Apache maintains an open source search library called "Lucene" (the companion crawler built on it is Nutch). How would I go about writing such a crawler to search the web for geotagged data close to a particular location? What would a general road map look like? How do I pick which slice of the web to crawl? Do I use regular expressions to find things that look like longitudes and latitudes? What does a general sketch of that solution look like?
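
    As a rough sketch of the regex piece of that solution: the Java snippet below (Java, since Lucene and Nutch are Java projects) scans fetched page text for decimal "lat, lon" pairs. The pattern is deliberately approximate, and real pages more often carry coordinates in geo meta tags or microformats, which are far more reliable than free-text regexes.

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class GeoExtractor {
            // Approximate match for decimal coordinate pairs like "40.7128, -74.0060".
            // Assumption: pages embed plain "lat, lon" decimals; the pattern bounds
            // degrees only roughly (it still admits edge cases like 90.5).
            private static final Pattern COORD = Pattern.compile(
                "(-?(?:90|[0-8]?\\d)(?:\\.\\d+)?)\\s*,\\s*(-?(?:180|1[0-7]\\d|\\d{1,2})(?:\\.\\d+)?)");

            public static void main(String[] args) {
                String pageText = "Our office: 40.7128, -74.0060 (New York)";
                Matcher m = COORD.matcher(pageText);
                while (m.find()) {
                    double lat = Double.parseDouble(m.group(1));
                    double lon = Double.parseDouble(m.group(2));
                    // A real crawler would now compute the distance to the target
                    // location and keep only candidates within the desired radius.
                    System.out.printf("candidate: lat=%f lon=%f%n", lat, lon);
                }
            }
        }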

    Read the article

  • Blogger homepage won't update!

    - by Sims Siniron
    I am new to blogging and Webmaster Tools. When I add a new post to my blog, my homepage usually gets updated automatically as well. But since 14th January, my homepage has not been updated in the Google SERPs. As a result I am losing my ranking in the SERPs. Previously, when I posted a new article, 70-80% of them would reach the first page of results. Since the problem occurred, none of them reach even the top 15 pages of Google's SERPs :( On 1/12/12, Google Webmaster sent me a "Notice of DMCA removal from Google Search" message indicating one of my URLs contained infringing content, which I deleted after receiving their notice. Not only that, I also checked all of my posts for any additional infringing content. After removing it, I filled out Google's Content Removed Notification form to notify them, and Google sent feedback that they received it, suggesting: "In the future, if you have removed the allegedly infringing content from your site (and won't put it back), please use the correct form", which I also filled out. Now my question is: is everything I did so far alright? Although my new posts are indexed in the SERPs with "..", why won't Google update my homepage, which was previously updated automatically whenever a new article was published?

    Read the article

  • How To Find Affordable Web Agency In India

    Finding an affordable web agency in India is a problem one should not even have to think about. This is due to the fact that every web agency is either way too expensive or, if it makes a reasonable quote for ... [Author: John Anthony - Web Design and Development - May 16, 2010]

    Read the article

  • How should I handle a redirect to an identity provider during a web api data request

    - by Erds
    Scenario
    I have a single-page web app consisting purely of HTML, CSS, and JavaScript. After the initial load and during use, it updates various views with data from one or more RESTful APIs via AJAX calls. The API calls return data in JSON format. Each web API may be hosted on an independent domain.
    Question
    During the AJAX callout, if my authorization token is not deemed valid by the web API, the web API will redirect me (302) to the identity provider for that particular API. Since this is an AJAX callout for data and not necessarily for display, I need to find a way to display the identity provider's authentication page. It seems that I should trap that redirect and open up another view to display the identity provider's login page. Once the OAuth series of redirects is complete, I need to grab the token and re-trigger my AJAX data call with the token attached. Is this a valid approach, and if so, are there any examples showing the AJAX handling of the redirects?
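
    One wrinkle worth knowing: XMLHttpRequest follows a 302 transparently, so client script never sees the redirect itself, only the identity provider's final response (which a cross-domain call will usually fail on anyway). A common workaround, if you control the APIs (an assumption, not something stated in the question), is to answer AJAX requests with 401 plus the login URL instead of a 302, so the client can open the login view itself and retry. A minimal servlet-filter sketch of that idea, with hypothetical names:

        import java.io.IOException;
        import javax.servlet.*;
        import javax.servlet.http.*;

        // Sketch: answer unauthenticated AJAX calls with 401 + the IdP login URL
        // instead of a 302, so client code can open a login view, complete the
        // OAuth dance, then retry the data call with the new token attached.
        public class AjaxAuthFilter implements Filter {
            private static final String LOGIN_URL = "https://idp.example.com/login"; // placeholder

            @Override
            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletRequest request = (HttpServletRequest) req;
                HttpServletResponse response = (HttpServletResponse) res;
                boolean ajax = "XMLHttpRequest".equals(request.getHeader("X-Requested-With"));
                boolean authenticated = isTokenValid(request); // your real token check

                if (!authenticated && ajax) {
                    response.setStatus(HttpServletResponse.SC_UNAUTHORIZED);
                    response.setHeader("X-Login-Url", LOGIN_URL); // client opens this in a view
                } else if (!authenticated) {
                    response.sendRedirect(LOGIN_URL); // ordinary browser navigation
                } else {
                    chain.doFilter(req, res);
                }
            }

            private boolean isTokenValid(HttpServletRequest request) {
                return request.getHeader("Authorization") != null; // placeholder logic
            }

            @Override public void init(FilterConfig config) {}
            @Override public void destroy() {}
        }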

    Read the article

  • Is it wrong to tell mobile users to view a site on their computer?

    - by betamax
    I am creating a web application that doesn't work correctly on mobile. I don't want to make it work on mobile because I would rather mobile users have a fully integrated experience and not have to use the web version. This mobile version will be released at a later date based on reaction to the initial web-based version. So, my question is: Is it wrong to not allow mobile users to use the site and instead show them some sort of splash screen telling them to come back to the site on a computer?

    Read the article

  • How to call a web method in a Java application?

    - by user12344
    Hi, I have created a Java web application (web service). I want to call the setName() method from a Java desktop application (GUI). How do I call the web method from the application?

        package sv;

        import javax.jws.WebMethod;
        import javax.jws.WebParam;
        import javax.jws.WebService;

        @WebService()
        public class MyService {
            @WebMethod(operationName = "setName")
            public String setName(@WebParam(name = "name") String name) {
                return "my string is " + name;
            }
        }
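
    A typical way to call this from a desktop (GUI) application is to generate a client proxy with wsimport against the deployed WSDL and then invoke it like a local object. A minimal sketch follows; the endpoint URL and service name below follow JAX-WS defaults for the class above and are assumptions to verify against your actual WSDL:

        import java.net.URL;
        import javax.xml.namespace.QName;
        import javax.xml.ws.Service;

        // Sketch of calling the web method from a GUI app. First run
        //   wsimport -keep http://localhost:8080/sv/MyService?wsdl
        // MyService below is the GENERATED service endpoint interface,
        // not the server-side class; names follow JAX-WS defaults.
        public class MyServiceClient {
            public static void main(String[] args) throws Exception {
                URL wsdl = new URL("http://localhost:8080/sv/MyService?wsdl");
                QName serviceName = new QName("http://sv/", "MyServiceService");
                Service service = Service.create(wsdl, serviceName);
                MyService port = service.getPort(MyService.class);
                // Invoke from your GUI event handler, e.g. a button click:
                String result = port.setName("Duke");
                System.out.println(result); // "my string is Duke"
            }
        }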

    Read the article

  • Sharepoint .PDF contents displaying as 'searchtext.xml' in searches

    - by Green Muffins
    Hi Experts, I recently installed iFilter in my SharePoint farm to enable searching the contents of .pdf documents. All went well, except that if I search for the contents of any .pdf file, they appear in the search results with the document title "searchtext.xml", and the link to the document opens a giant page of the .pdf contents in an XML-looking browser page. :s I have added the .pdf file type to the search, so I am unsure why it is reading them incorrectly. If I search for a .pdf document title such as 'document.pdf', it will display the result as an HTML page, though the link does lead to a readable .pdf file. Any help?

    Read the article

  • Should I go along with my choice of web hosting company or still search?

    - by Devner
    Hi all, I have been searching for a good hosting company that can offer all the services I need to host my PHP and MySQL based website. This is a community site, and users will be able to upload pictures, etc. The hosting company I have in mind currently lets me do everything: use mail(), run CRON jobs, and so on, for about $6/month. The only problem with this company is that they limit accounts to 50,000 files at any given time, which rather contradicts the "UNLIMITED SPACE" ad on their front page. Apart from this, I know of no other reason not to go with them.

    The 50,000-file limit, however, is something I cannot live with once the user base grows significantly and the files they upload exceed that number. Since this is a dynamic website that also handles sensitive matters like payments, I am not sure whether I should start with this company and later switch to a better host that does not impose the 50,000-file limit. If I do switch later, I will need to back up all the files in my account (jpg, zip, etc.) and upload them to the new host, and I am not aware of any tools that can help with this process. Can you please mention any you know of?

    I could go with the other companies right now, but they cost double or triple the current price and all offer fewer features than my current choice; if I pay more, they are ready to accommodate my higher demands. Unfortunately, the company I am willing to go with now does NOT have any higher or better plan I could switch to, which is the really, really bad part. So my questions:

    1. Since I am starting out and the initial user base will be small, should I go with my current choice and switch to a better provider once demand increases? If yes, how can I transfer my database, and especially the jpg files, to the new provider? I don't even know the tools required to back up from one host and restore to another. (I don't like this idea, but still.)

    2. Should I pay more right now and go with a better provider (without knowing whether the website will really do that well), just to save myself the trouble of backing up 50,000 files on the old host and uploading them to a new one, and start paying double or triple the price without knowing whether I will see the returns I expect?

    Backup and restore in such bulky numbers is something I have never done before, hence I am stuck trying to decide what to do. The price per month is also a considerable factor in my decision. All these hosting companies say one thing in common: it is the customer's responsibility to back up and restore data, and they are not liable for any loss. So whichever company I go with, they ask me to take backups via FTP so I can restore them whenever I want (and it seems safer to keep the files locally with me anyway). Some provide backup tools and some do not, and I am not sure how far their tools can be trusted given the disclaimers they carry.

    I have never backed up and restored 50,000 files from one web host to another, so please, all you experienced people out there, leave your comments and suggestions so that I can decide. I have spent two days fighting with myself trying to decide, and finally concluded that this is a double-edged sword I can't resolve without others' suggestions. Surely someone out there has had to make a similarly troublesome decision. All suggestions to help me decide are appreciated. Thank you all.
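
    On the tooling question raised above: any FTP client that can mirror a directory tree (FileZilla, for example) will do for pulling the files down, and most hosts offer phpMyAdmin exports for the MySQL side. If you would rather script the file transfer, a rough sketch using Apache Commons Net is below; the host, credentials, and paths are placeholders, and it assumes the old host allows plain FTP access:

        import java.io.File;
        import java.io.FileOutputStream;
        import java.io.OutputStream;
        import org.apache.commons.net.ftp.FTP;
        import org.apache.commons.net.ftp.FTPClient;
        import org.apache.commons.net.ftp.FTPFile;

        // Sketch: mirror a remote directory tree over FTP using Apache Commons Net.
        // Host, credentials and paths are placeholders for your real account.
        public class FtpMirror {
            public static void main(String[] args) throws Exception {
                FTPClient ftp = new FTPClient();
                ftp.connect("ftp.old-host.example.com");
                ftp.login("username", "password");
                ftp.enterLocalPassiveMode();
                ftp.setFileType(FTP.BINARY_FILE_TYPE); // avoid corrupting jpg/zip files
                download(ftp, "/public_html", new File("backup"));
                ftp.logout();
                ftp.disconnect();
            }

            static void download(FTPClient ftp, String remoteDir, File localDir) throws Exception {
                localDir.mkdirs();
                for (FTPFile f : ftp.listFiles(remoteDir)) {
                    if (f.getName().equals(".") || f.getName().equals("..")) {
                        continue; // some servers list these; skip to avoid infinite recursion
                    }
                    String remotePath = remoteDir + "/" + f.getName();
                    if (f.isDirectory()) {
                        download(ftp, remotePath, new File(localDir, f.getName()));
                    } else {
                        try (OutputStream out = new FileOutputStream(new File(localDir, f.getName()))) {
                            ftp.retrieveFile(remotePath, out);
                        }
                    }
                }
            }
        }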

    Read the article

  • Want to know how a particular page can be searched by Google?

    - by Champ
    I want to know what keywords a page can be found by on Google. Is there any tool on the web that can give me the keywords for a page I want to find? E.g., if we search for "test" on Google, we will find this. Now, what do I have to search for (which keywords) to find a particular page, say abc.com/test.php? Is there any tool by which I can get those keywords? Sorry if I am not clear with the question.

    Read the article

  • Lotus Notes: Searching email by fields

    - by themel
    I'm using Lotus Notes 8.5.2 in a large corporate deployment. I'm trying to figure out how to search my email in a structured manner, e.g. by specifying criteria on fields. The help seems to suggest that I can use fields in square brackets and a list of operators, e.g. to find all mail where the From field contains John, I'd search for /[From] CONTAINS John However, I can't get this to work - any operator style query I've tried returns zero documents. "Web-style" queries (e.g. typing John into the search dialog) work, but I'd really prefer a way that would let me search more precisely. Potential issues: I'm assuming that the field names can be taken from the list of things I see when I open a mail and look at its Document Properties. Full text indexing is turned off for my mailbox, and all my attempts to create my own have failed. Does anyone have better information on searching by from/date/subject conditions in Notes?

    Read the article

  • Improvements to ASP.NET Web Forms

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/05/19/improvements-to-asp.net-web-forms.aspx
    Contrary to what the prophets of gloom might say, the ASP.NET team at Microsoft are continuing to develop web forms. Please see the article at http://blogs.msdn.com/b/webdev/archive/2014/05/13/improvements-to-asp-net-web-forms.aspx The bulk of these changes should be part of VS2013 Update 2, which is now available as an RTM.

    Read the article

  • Discovering ASP.NET Web API, an article by Hinault Romaric

    Hello, I am pleased to announce the publication of my new article introducing ASP.NET Web API, arguably the most exciting new feature of ASP.NET MVC 4. Quote: Given the ever-increasing need for interaction between a web application and a very broad set of clients (social networks, browsers, mobile devices, native applications, etc.), it was essential to provide developers with a way for their applications to communicate with these different types of clients. Web AP...

    Read the article

  • Is having a single `IndexWriter` instance in Lucene a good idea?

    - by Dragos
    I am trying to understand how Lucene should be used. From what I have read, creating an IndexReader is costly, so using a SearcherManager should be the right choice. However, a SearcherManager should be produced by an NRTManager (which, by the way, should replace the IndexWriter for every add or delete operation performed). But in order to have an NRTManager, I should first have an IndexWriter, and here comes my problem. The documentation says:
    - an IndexWriter is thread-safe
    - the constructor of this class takes a Directory object, so it seems creating an instance should be costly (as in the case of an IndexReader)
    - all changes are buffered and flushed periodically (so they seem to encourage using a single instance)
    but:
    - the changes, although flushed, will only be visible after commit or close
    - after finishing making updates (add/delete), the instance should be closed
    I also found this: http://stackoverflow.com/questions/5374419/forgot-to-close-the-lucene-indexwriter-after-adding-documents-to-the-index where it is said that not closing a writer might ruin everything. So what am I really supposed to do? Is having a single IndexWriter instance a good idea (make only commits and never close it)? EDIT: What is more, if I use NRTManager, how can I make a commit? Is it even possible?
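
    For reference, the commonly recommended pattern matches the question's guess: keep one long-lived IndexWriter shared by all threads, call commit() at durability points, and close() only at application shutdown. Even with an NRTManager on top, a commit is simply made on the underlying writer. A minimal sketch against a Lucene 4.x-era API (version details are an assumption):

        import java.io.File;
        import org.apache.lucene.analysis.standard.StandardAnalyzer;
        import org.apache.lucene.document.Document;
        import org.apache.lucene.document.Field;
        import org.apache.lucene.document.TextField;
        import org.apache.lucene.index.IndexWriter;
        import org.apache.lucene.index.IndexWriterConfig;
        import org.apache.lucene.search.SearcherFactory;
        import org.apache.lucene.search.SearcherManager;
        import org.apache.lucene.store.Directory;
        import org.apache.lucene.store.FSDirectory;
        import org.apache.lucene.util.Version;

        // Sketch: one application-wide IndexWriter kept open for the process
        // lifetime. commit() makes changes durable; SearcherManager hands out
        // near-real-time readers without ever closing the writer.
        public class IndexHolder {
            private final IndexWriter writer;
            private final SearcherManager searcherManager;

            public IndexHolder(File path) throws Exception {
                Directory dir = FSDirectory.open(path);
                IndexWriterConfig cfg = new IndexWriterConfig(
                        Version.LUCENE_45, new StandardAnalyzer(Version.LUCENE_45));
                writer = new IndexWriter(dir, cfg);
                // true = apply deletes; readers see uncommitted changes (NRT)
                searcherManager = new SearcherManager(writer, true, new SearcherFactory());
            }

            public void add(String text) throws Exception {
                Document doc = new Document();
                doc.add(new TextField("body", text, Field.Store.YES));
                writer.addDocument(doc);
                searcherManager.maybeRefresh(); // make the change searchable
            }

            public void checkpoint() throws Exception {
                writer.commit(); // durable on disk; the writer stays open
            }

            public void shutdown() throws Exception {
                searcherManager.close();
                writer.close(); // only at application shutdown
            }
        }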

    Read the article

  • Multiple vulnerabilities in Firefox web browser

    - by chandan
    Component: Firefox web browser
    Product and Resolution: Solaris 11 11/11 SRU 9.5; Solaris 10 (SPARC: 145080-11, X86: 145081-10)

    CVE             Description                                                                                          CVSSv2 Base Score
    CVE-2011-3062   Numeric Errors vulnerability                                                                         6.8
    CVE-2012-0467   Denial of service (DoS) vulnerability                                                                10.0
    CVE-2012-0468   Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability                10.0
    CVE-2012-0469   Resource Management Errors vulnerability                                                             10.0
    CVE-2012-0470   Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability                10.0
    CVE-2012-0471   Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability   4.3
    CVE-2012-0473   Numeric Errors vulnerability                                                                         5.0
    CVE-2012-0474   Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability   4.3
    CVE-2012-0477   Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability   4.3
    CVE-2012-0478   Permissions, Privileges, and Access Controls vulnerability                                           9.3
    CVE-2012-0479   Identity spoofing vulnerability                                                                      4.3

    This notification describes vulnerabilities fixed in third-party components that are included in Sun's product distribution. Information about vulnerabilities affecting Oracle Sun products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • Are scheduled job servers the right choice for a time sensitive game engine?

    - by maple_shaft
    I am currently architecting and designing an exciting new web application that will take me into areas where I have very little experience: game development. The application is not necessarily a game, but there are some very time-sensitive tasks and scheduled jobs that a server will need to run for game-related activities (e.g. a new match-up starts at noon every day for a 12-day tournament, scoreboards are updated at 5pm every day, etc.). In the past I have typically used cron-style jobs with the Quartz Scheduler running within a web application server, but I know this likely isn't a scalable solution for the truly massive user base that management is telling me to expect (granted, they are management and are probably highly optimistic about this), nor for how important these tasks are in this web application. The other important thing I want to consider is avoiding a SPOF (Single Point Of Failure): if the primary job server goes down, another job server should be able to run the job in its place. I suppose this can be done with appropriate record locking and database transactions. My question is whether scheduled jobs like cron running on a web application server are a wise design choice given the time-sensitive game tasks of this application, or is there something more appropriate for running a scalable game engine in parallel with the web application servers?
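
    For what it's worth, Quartz can already run clustered: point every node at a shared JDBC job store with org.quartz.jobStore.isClustered=true and each trigger fires on exactly one node, which removes the scheduler as a SPOF. Below is a hand-rolled sketch of the record-locking idea mentioned above, where the job fires on every node but a database row lock lets only one node execute it; the table and column names are hypothetical, and it assumes the scheduled_jobs row is seeded once up front:

        import java.sql.Connection;
        import java.sql.Date;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.time.LocalDate;

        // Sketch: the scheduler fires this job on every node at noon, but a row
        // lock lets only one node actually run it. Assumes a pre-seeded
        // scheduled_jobs(job_name, last_run) table -- a hypothetical schema.
        public class DailyMatchupJob {
            public void runIfLeader(Connection conn) throws Exception {
                LocalDate today = LocalDate.now();
                conn.setAutoCommit(false);
                try {
                    PreparedStatement ps = conn.prepareStatement(
                            "SELECT last_run FROM scheduled_jobs WHERE job_name = ? FOR UPDATE");
                    ps.setString(1, "daily_matchup");
                    ResultSet rs = ps.executeQuery();
                    rs.next();
                    if (rs.getDate("last_run").toLocalDate().isEqual(today)) {
                        conn.rollback(); // another node already ran today's match-up
                        return;
                    }
                    startDailyMatchup(); // the actual game logic
                    PreparedStatement upd = conn.prepareStatement(
                            "UPDATE scheduled_jobs SET last_run = ? WHERE job_name = ?");
                    upd.setDate(1, Date.valueOf(today));
                    upd.setString(2, "daily_matchup");
                    upd.executeUpdate();
                    conn.commit(); // releases the lock and records the run
                } catch (Exception e) {
                    conn.rollback();
                    throw e;
                }
            }

            private void startDailyMatchup() { /* game-specific work goes here */ }
        }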

    Read the article

  • Do you use the JBoss Seam web framework, designed to simplify web application development? Share your experience

    The Java team invites you to a debate about the JBoss Seam web framework. Available since early 2005 and created by Gavin King, the creator of Hibernate, this framework aims to simplify web application development. To do so, Seam builds on the EJB3 and JSF standards from Java EE and focuses on reducing the complexity of these different building blocks (see the article "Présentation globale de Seam" for the basic principles). Today Seam has reached version 3 and offers many advances to simplify web development. In parallel, many web frameworks have already managed to ...

    Read the article

  • Adding multiple data importers support to web applications

    - by DigiMortal
    I’m building web application for customer and there is requirement that users must be able to import data in different formats. Today we will support XLSX and ODF as import formats and some other formats are waiting. I wanted to be able to add new importers on the fly so I don’t have to deploy web application again when I add new importer or change some existing one. In this posting I will show you how to build generic importers support to your web application. Importer interface All importers we use must have something in common so we can easily detect them. To keep things simple I will use interface here. public interface IMyImporter {     string[] SupportedFileExtensions { get; }     ImportResult Import(Stream fileStream, string fileExtension); } Our interface has the following members: SupportedFileExtensions – string array of file extensions that importer supports. This property helps us find out what import formats are available and which importer to use with given format. Import – method that does the actual importing work. Besides file we give in as stream we also give file extension so importer can decide how to handle the file. It is enough to get started. When building real importers I am sure you will switch over to abstract base class. Importer class Here is sample importer that imports data from Excel and Word documents. Importer class with no implementation details looks like this: public class MyOpenXmlImporter : IMyImporter {     public string[] SupportedFileExtensions     {         get { return new[] { "xlsx", "docx" }; }     }     public ImportResult Import(Stream fileStream, string extension)     {         // ...     } } Finding supported import formats in web application Now we have importers created and it’s time to add them to web application. Usually we have one page or ASP.NET MVC controller where we need importers. To this page or controller we add the following method that uses reflection to find all classes that implement our IMyImporter interface. private static string[] GetImporterFileExtensions() {     var types = from a in AppDomain.CurrentDomain.GetAssemblies()                 from t in a.GetTypes()                 where t.GetInterfaces().Contains(typeof(IMyImporter))                 select t;       var extensions = new Collection<string>();     foreach (var type in types)     {         var instance = (IMyImporter)type.InvokeMember(null,                        BindingFlags.CreateInstance, null, null, null);           foreach (var extension in instance.SupportedFileExtensions)         {             if (extensions.Contains(extension))                 continue;               extensions.Add(extension);         }     }       return extensions.ToArray(); } This code doesn’t look nice and is far from optimal but it works for us now. It is possible to improve performance of web application if we cache extensions and their corresponding types to some static dictionary. We have to fill it only once because our application is restarted when something changes in bin folder. Finding importer by extension When user uploads file we need to detect the extension of file and find the importer that supports given extension. We add another method to our page or controller that uses reflection to return us importer instance or null if extension is not supported. 
private static IMyImporter GetImporterForExtension(string extensionToFind) {     var types = from a in AppDomain.CurrentDomain.GetAssemblies()                 from t in a.GetTypes()                 where t.GetInterfaces().Contains(typeof(IMyImporter))                 select t;     foreach (var type in types)     {         var instance = (IMyImporter)type.InvokeMember(null,                        BindingFlags.CreateInstance, null, null, null);           if (instance.SupportedFileExtensions.Contains(extensionToFind))         {             return instance;         }     }       return null; } Here is example ASP.NET MVC controller action that accepts uploaded file, finds importer that can handle file and imports data. Again, this is sample code I kept minimal to better illustrate how things work. public ActionResult Import(MyImporterModel model) {     var file = Request.Files[0];     var extension = Path.GetExtension(file.FileName).ToLower();     var importer = GetImporterForExtension(extension.Substring(1));     var result = importer.Import(file.InputStream, extension);     if (result.Errors.Count > 0)     {         foreach (var error in result.Errors)             ModelState.AddModelError("file", error);           return Import();     }     return RedirectToAction("Index"); } Conclusion That’s it. Using couple of ugly methods and one simple interface we were able to add importers support to our web application. Example code here is not perfect but it works. It is possible to cache mappings between file extensions and importer types to some static variable because changing of these mappings means that something is changed in bin folder of web application and web application is restarted in this case anyway.

    Read the article

  • Event Handlers and Automatic Postback in ASP.NET 3.5 Web Controls

    In one of last week's tutorials, Creating Database-Driven ASP.NET 3.5 Input and List Web Controls, you learned how to create a dynamic input web control that, instead of setting values statically, stored its list and values directly in the MS SQL Server 2008 database. This tutorial is a sequel to that article. It deals mostly with the server-side coding aspect of dynamic web controls. It is recommended that you read the earlier tutorial first, as the Visual Web Developer project from that tutorial will be used extensively in this article....

    Read the article
