Search Results

Search found 11671 results on 467 pages for 'man pages'.


  • Extension methods in class library project

    - by Mostafa
    I've implemented some extension methods and put them in a separate class library project. Imagine I have a simple extension method like this in a class library called MD.Utility:

        namespace MD.Utility
        {
            public static class ExtenMethods
            {
                public static bool IsValidEmailAddress(this string s)
                {
                    Regex regex = new Regex(@"^[\w-\.]+@([\w-]+\.)+[\w-]{2,4}$");
                    return regex.IsMatch(s);
                }
            }
        }

    But nowhere in the web app (neither in the App_Code folder nor in a WebForms code-behind page) can I use this extension method. If I do this:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.UI;
        using System.Web.UI.WebControls;
        using MD.Utility;

        public partial class _Default : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                string email = "[email protected]";
                if (email.IsValidEmailAddress())
                {
                    // To do
                }
            }
        }

    the compiler doesn't recognize IsValidEmailAddress() and there is no IntelliSense support either. Yet if I put my extension method in the App_Code folder, it works fine when used from another .cs file in App_Code or from Web Form code-behind pages.
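    As a point of comparison, a minimal consuming sketch; it assumes the consuming project has a project/assembly reference to the MD.Utility class library and targets .NET 3.5 or later (the email address is just a placeholder):

        // Minimal sketch: the call only resolves if (1) the MD.Utility assembly is
        // referenced by this project and (2) the namespace is imported, since the
        // compiler looks for extension methods in imported namespaces only.
        using System;
        using MD.Utility;   // brings ExtenMethods into scope for extension lookup

        public class EmailCheckDemo
        {
            public static void Main()
            {
                string email = "someone@example.com";   // placeholder address

                // Compiles to ExtenMethods.IsValidEmailAddress(email); without the
                // 'using MD.Utility;' line or the assembly reference, this call does
                // not compile and IntelliSense stays silent.
                Console.WriteLine(email.IsValidEmailAddress()
                    ? "Looks like a valid address"
                    : "Not a valid address");
            }
        }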

    Read the article

  • TemplateField button causing GridView Invalid Postback

    - by Carter
    Ok, so I've got a template field in a GridView that contains just a simple button...

        <asp:GridView ID="KeywordsGridView" AllowPaging="false" AutoGenerateColumns="false"
            BackColor="white" GridLines="None" HeaderStyle-CssClass="Table_Header"
            RowStyle-CssClass="Table_Style" runat="server">
            <Columns>
                <asp:TemplateField>
                    <ItemTemplate>
                        <asp:Button runat="server" />
                    </ItemTemplate>
                </asp:TemplateField>
                <asp:BoundField DataField="References" SortExpression="References" HeaderText="Total References" />
                <asp:BoundField DataField="Keyword" SortExpression="Keyword" HeaderText="Keyword" />
            </Columns>
        </asp:GridView>

    Whenever I click the button I get the error...

        Invalid postback or callback argument. Event validation is enabled using <pages enableEventValidation="true"/>
        in configuration or <%@ Page EnableEventValidation="true" %> in a page. For security purposes, this feature
        verifies that arguments to postback or callback events originate from the server control that originally
        rendered them. If the data is valid and expected, use the ClientScriptManager.RegisterForEventValidation
        method in order to register the postback or callback data for validation.

    I've found a decent amount of articles referencing this issue, including a couple on SO, for example... http://stackoverflow.com/questions/228969/asp-net-invalid-postback-or-callback-argument-event-validation-is-enabled-usi and... http://stackoverflow.com/questions/103560/invalid-postback-or-callback-argument I might just be misunderstanding, but as far as I can tell they don't really help me. How do I get this to go away without setting enableEventValidation="false"?
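    One commonly suggested rearrangement (a sketch, not a confirmed fix for this page): give the button an ID and a CommandName in the ItemTemplate (e.g. <asp:Button ID="SelectButton" CommandName="SelectKeyword" runat="server" />, names assumed here), add OnRowCommand="KeywordsGridView_RowCommand" to the GridView, bind the grid only on the initial request, and handle the click in RowCommand:

        // Sketch of the code-behind; LoadKeywords() is a hypothetical data-access
        // helper. Event validation errors with template buttons are often attributed
        // to the grid being re-bound on every postback, so the rendered controls no
        // longer match what the page originally emitted.
        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                KeywordsGridView.DataSource = LoadKeywords();
                KeywordsGridView.DataBind();
            }
        }

        protected void KeywordsGridView_RowCommand(object sender, GridViewCommandEventArgs e)
        {
            if (e.CommandName == "SelectKeyword")
            {
                // The button's naming container is the row it sits in.
                GridViewRow row = (GridViewRow)((Control)e.CommandSource).NamingContainer;
                string keyword = row.Cells[2].Text;   // "Keyword" bound field
                // act on the clicked row...
            }
        }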

    Read the article

  • Can you force a crash if a write occurs to a given memory location with finer than page granularity?

    - by Joseph Garvin
    I'm writing a program that for performance reasons uses shared memory (alternatives have been evaluated, and they are not fast enough for my task, so suggestions to not use it will be downvoted). In the shared memory region I am writing many structs of a fixed size. There is one program responsible for writing the structs into shared memory, and many clients that read from it. However, there is one member of each struct that clients need to write to (a reference count, which they will update atomically). All of the other members should be read only to the clients. Because clients need to change that one member, they can't map the shared memory region as read only. But they shouldn't be tinkering with the other members either, and since these programs are written in C++, memory corruption is possible. Ideally, it should be as difficult as possible for one client to crash another. I'm only worried about buggy clients, not malicious ones, so imperfect solutions are allowed. I can try to stop clients from overwriting by declaring the members in the header they use as const, but that won't prevent memory corruption (buffer overflows, bad casts, etc.) from overwriting. I can insert canaries, but then I have to constantly pay the cost of checking them. Instead of storing the reference count member directly, I could store a pointer to the actual data in a separate mapped write only page, while keeping the structs in read only mapped pages. This will work, the OS will force my application to crash if I try to write to the pointed to data, but indirect storage can be undesirable when trying to write lock free algorithms, because needing to follow another level of indirection can change whether something can be done atomically. Is there any way to mark smaller areas of memory such that writing them will cause your app to blow up? Some platforms have hardware watchpoints, and maybe I could activate one of those with inline assembly, but I'd be limited to only 4 at a time on 32-bit x86 and each one could only cover part of the struct because they're limited to 4 bytes. It'd also make my program painful to debug ;)
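    To make the page-granularity point concrete, a small POSIX/Linux sketch of the "reference counts on their own writable page" layout (illustrative only; a real program would use shm_open/mmap on a named object rather than an anonymous mapping):

        // Hardware protection is page-granular, so one layout that keeps the OS
        // enforcing "read-only structs" is to put the writable reference counts on
        // their own page(s) and map everything else read-only in the clients.
        #include <sys/mman.h>
        #include <cstddef>
        #include <cstdint>

        struct Record {                 // read-only part, no refcount inside
            std::uint64_t id;
            char payload[56];
        };

        int main() {
            const std::size_t page = 4096;          // assume 4 KiB pages
            const std::size_t nRecords = 1024;

            std::size_t roBytes = ((nRecords * sizeof(Record) + page - 1) / page) * page;
            std::size_t rwBytes = ((nRecords * sizeof(std::uint32_t) + page - 1) / page) * page;

            void* base = mmap(nullptr, roBytes + rwBytes, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (base == MAP_FAILED) return 1;

            Record*        records   = static_cast<Record*>(base);
            std::uint32_t* refCounts = reinterpret_cast<std::uint32_t*>(
                                           static_cast<char*>(base) + roBytes);

            // A client now drops write permission on the struct pages only; any
            // stray store into 'records' faults, while refCounts stays writable.
            if (mprotect(records, roBytes, PROT_READ) != 0) return 1;

            refCounts[0] = 1;          // fine
            // records[0].id = 42;     // would raise SIGSEGV under this protection
            return 0;
        }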

    Read the article

  • Can I have two separate projects, 1 WebForms and 1 ASP.NET MVC, to both point to the same domain?

    - by Hamman359
    Is it possible to set up two separate projects, one WebForms and one ASP.NET MVC, so that both point to the same domain? I.e. both point to different pages within www.somesite.com. Here's some background on the application and why I'm asking. This is a brownfield application that is currently 2.0 WebForms and is full of WebFormy 'goodness' (i.e. ObjectDataSources, FormView controls, UpdatePanels, etc...). There are lots of other 'fun' things in the code base, like 600+ stored procedures and 200+ line methods in the business layer code that get data from the DB via stored proc, do some processing on the data, build an HTML string using string concatenation and then return that string to the UI layer. What we are planning on doing is developing new features in MVC and slowly converting the existing features over to MVC one at a time. As part of this transition, we will also be re-writing the layers below the UI to clean up the mess there and to do things like replace the stored procedures with NHibernate and introduce an IoC container. I know that you can run WebForms and MVC side by side in the same project; however, because we will be making wholesale changes to the way we do many things throughout our entire development stack, I'd like the new stuff to be a completely separate project within the solution. This should serve as a very visual reminder that this is a different way of doing things than before, and make it easier to remove the old code as it is no longer needed. What I don't know is, is this even possible? Can two separate projects point to the same domain? Here's a quick example of what I'm thinking: www.somesite.com/orders.aspx?id=123 (Orders page from the existing WebForms project), www.somesite.com/customer/987 (Customer page from the new MVC project).
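    Whether this works mostly comes down to how the two projects end up deployed under the one host name; if they are merged under a single site or virtual directory, the MVC routing table can at least be told to leave .aspx URLs alone so the existing WebForms pages keep resolving as physical files (a sketch, not specific to this code base):

        // Illustrative only: let .aspx requests fall through to WebForms while MVC
        // handles the extensionless URLs such as /customer/987.
        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

            // Catch-all constraint so e.g. /orders.aspx?id=123 is ignored by routing.
            routes.IgnoreRoute("{*allaspx}", new { allaspx = @".*\.aspx(/.*)?" });

            routes.MapRoute(
                "Default",
                "{controller}/{action}/{id}",
                new { controller = "Home", action = "Index", id = "" }
            );
        }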

    Read the article

  • How to elegantly handle ReturnUrl when using UrlRewrite in ASP.NET 2.0 WebForms

    - by Brian Kim
    I have a folder with multiple .aspx pages that I want to restrict access to. I have added web.config to that folder with <deny users="?"/>. The problem is that ReturnUrl is auto-generated with physical path to the .aspx file while I'm using UrlRewrite. Is there a way to manipulate ReturnUrl without doing manual authentication check and redirection? Is there a way to set ReturnUrl from code-behind or from web.config? EDIT: The application is using ASP.NET 2.0 WebForms. I cannot use 3.5 routing. EDIT 2: It seems like 401 status code is never captured. It returns 302 for protected page and redirects to login page with ReturnUrl. It does not return 401 for protected page. Hmm... Interesting... Ref: http://msdn.microsoft.com/en-us/library/aa480476.aspx This makes things harder... I might have to write reverse rewrite mapping rules to regex match ReturnUrl and replace it if it doesn't return 401... If it does return 401 I can either set RawUrl to Response.RedirectLocation or replace ReturnUrl with RawUrl. Anyone else have any other ideas?
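    One way to express the "replace ReturnUrl with RawUrl" idea from the edit (a sketch for Global.asax, assuming Forms authentication is what issues the 302; it relies on Request.RawUrl still holding the public, pre-rewrite URL, and on the System.Web and System.Web.Security namespaces):

        // Sketch only: when the 302 to the login page has been generated with the
        // physical path in ReturnUrl, swap in the URL the browser actually requested.
        protected void Application_EndRequest(object sender, EventArgs e)
        {
            HttpApplication app = (HttpApplication)sender;
            HttpResponse response = app.Response;

            if (response.StatusCode == 302 &&
                response.RedirectLocation != null &&
                response.RedirectLocation.StartsWith(FormsAuthentication.LoginUrl,
                                                     StringComparison.OrdinalIgnoreCase))
            {
                string returnUrl = HttpUtility.UrlEncode(app.Request.RawUrl);
                response.RedirectLocation = FormsAuthentication.LoginUrl + "?ReturnUrl=" + returnUrl;
            }
        }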

    Read the article

  • TextBox doesn't fire TextChanged Event on IE 8, AutoPostback is true

    - by MaikoID
    Hi guys, I have the same problem: there are many TextBoxes with the TextChanged event set and with AutoPostBack = true, and it works in all browsers (Chrome, Opera, Firefox 3.6) except IE 8; IE 6/7 I didn't test. I don't want to add an onblur handler to all my TextBoxes, because there are many pages with many TextBoxes that use this event. Description: I'm using a master page. In the .aspx I have:

        <asp:TextBox ID="txtCnpj" runat="server" CssClass="txt" Width="200px"
            onkeyup="Mascara(this,Cnpj)" onkeydown="Mascara(this,Cnpj)" MaxLength="18"
            AutoPostBack="true" ValidationGroup="txtCnpj"
            OnTextChanged="txtCnpj_TextChanged"></asp:TextBox>

    In the .aspx.cs:

        protected void txtCnpj_TextChanged(object sender, EventArgs e)
        {
            if (CredorInvestimento.GetCredorInvestimento(txtCnpj.Text) != null)
            {
                ((TextBox)sender).Text = "";
                ((TextBox)sender).Focus();
                rfvCnpj.ErrorMessage = "Duplicado";
                Page.Validate(txtCnpj.ID);
            }
            else
                txtNome.Focus();
        }

    Thanks! PS: I really don't like ASP.NET; I spend more time fixing errors than developing new functionality. PS: sorry for my English. PS: if I remove the onkeydown and onkeyup events, TextChanged fires in IE, but I really need those events too.

    Read the article

  • Strange behaviour using drag and drop in Word 2003 automation in headers

    - by Oliver Hanappi
    Hi! I am developing a template-based add-in for Word 2003 which allows the user to drag and drop elements from a listbox into the Word document. Unfortunately I'm getting a really strange behaviour when trying to drop elements in the document's header:

    1. Open the template and type something in the header.
    2. Close the header and insert some content on the page.
    3. Add a page break.
    4. Switch to page layout mode and set the zoom level to "Two Pages".
    5. Open the header.
    6. Slowly drag and drop a list item from the list box to the header.
    7. See multiple Page Setup dialogs occur, which cause Word to crash.

    Here is my code:

        // in ThisDocument.cs
        public MyUserControl _control;

        public void Init()
        {
            _control = new MyUserControl();
            ActionsPane.Controls.Add(_control);
            ActionsPane.Visible = true;
        }

        // in MyUserControl.cs
        public void listBox1_MouseDown(object sender, MouseEventArgs e)
        {
            DoDragDrop("something", DragDropEffects.Copy);
        }

    Have I done something wrong with implementing drag and drop? Is there a workaround for this strange behaviour? Thanks in advance, Oliver Hanappi

    Read the article

  • How to use routing in an ASP.NET MVC website to localize in two languages, but keep the existing URLs

    - by Anders Pedersen
    We have a couple of ASP.NET MVC websites just using the standard VS templates' default settings, working as wanted. But now I want to localize these websites (they are currently in Dutch and I will add English). I would like to use routing rather than resources because:

    1. The languages will differ in content, number of pages, etc.
    2. The content is mostly text.

    I would like the URLs to look something like this: www.domain.com/en/Home/Index, www.domain.nl/nl/Home/Index. But the last one should also work as www.domain.nl/Home/Index, which is the existing URL. I have implemented Phil Haack's areas ViewEngine from this blog post - http://haacked.com/archive/2008/11/04/areas-in-aspnetmvc.aspx - but only put the English website in the areas, keeping the Dutch site in the old structure, which is served by Phil's default fallback. The problem there is that I have to duplicate my controllers for both languages. So I tried the approach described in this thread - http://stackoverflow.com/questions/1712167/asp-net-mvc-localization-route. It works OK with /en/ and /nl/ but not with the old URLs. When using this code in Global.asax, the URL without the culture isn't working:

        public static void RegisterRoutes(RouteCollection routes)
        {
            //routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

            routes.MapRoute(
                "Default",                                  // Route name
                "{culture}/{controller}/{action}/{id}",     // URL with parameters
                new { culture = "nl-NL", controller = "Home", action = "Index", id = "" } // Parameter defaults
            );

            routes.MapRoute(
                "DefaultWitoutCulture",                     // Route name
                "{controller}/{action}/{id}",               // URL with parameters
                new { controller = "Home", action = "Index", id = "" } // Parameter defaults
            );
        }

    I'm probably overlooking something simple, but I can't get this to work for me. Or is there a better way of doing this?
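    One commonly suggested fix (a sketch against the code above, not tested in this exact setup): add a constraint on {culture} so that a URL like /Home/Index cannot bind "Home" to the culture segment and is left for the culture-less route instead.

        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

            routes.MapRoute(
                "Default",
                "{culture}/{controller}/{action}/{id}",
                new { culture = "nl-NL", controller = "Home", action = "Index", id = "" },
                new { culture = @"[a-z]{2}(-[A-Z]{2})?" }   // only real culture codes ("nl", "en", "nl-NL", ...) match
            );

            routes.MapRoute(
                "DefaultWitoutCulture",
                "{controller}/{action}/{id}",
                new { controller = "Home", action = "Index", id = "" }
            );
        }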

    Read the article

  • Visual Studio 2008 IDE freezes/crashes when opening .aspx file with css included

    - by Kai
    I have read a lot of questions about Visual Studio 2008 crashing on viewing some source files. However, I still can't fix this problem. Visual Studio (SP1) runs fine until I try and view .aspx source files with the lines <style type="text/css"> </stlye> anywhere in them, upon which it freezes (i.e is totally unresponsive) and I have to use the task manager to shut it down. I have systematically deleted and re-included all other code and it comes down to these two lines, which is very confusing. Sometimes it happens as soon as the lines are added, sometimes it doesn't freeze until I build the solution with any of the problem pages open. I can add external style sheets, and it only started recently. I tried the event viewer logs but I don't really understand how to use them to find out about this. I had Resharper 4.5 installed and have since uninstalled it, and do not have anything else installed. Is there any way I can a) find out what's happening, b) fix it without reinstalling vs?

    Read the article

  • Dynamically load Jquery into .js page

    - by RussP
    Please excuse me if I'm being simple here. I want to create a simple widget that people can access from their websites, e.g. by copy/pasting something like

        <script language="javascript" src="test2.js"></script>
        <div id="test"></div>

    anywhere in their web pages, where the <div id="test"></div> is dynamically filled via jQuery and the functions in/on test2.js. I can do it if jQuery is actually "printed" on the page by test2.js, but I cannot get any jQuery functions to work if I try to include/call jQuery dynamically. How do you load jQuery via JavaScript and then get it to work with the functions on the page? And/or is there an easy way to add the <div id="test"></div> dynamically as well? Sure, I can body.append etc., but that only adds at the bottom of the page. Is there a way to .append in the position where the script include <script language="javascript" src="test2.js"></script> is actually placed? Hope I make sense.
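    A sketch of one way test2.js could do both things at once (the CDN URL, element id and text are just examples; older IE needs onreadystatechange in addition to onload):

        // Illustrative sketch for test2.js: insert the widget's placeholder right
        // where the <script> tag sits, load jQuery on demand, then render.
        (function () {
            var scripts = document.getElementsByTagName('script');
            var current = scripts[scripts.length - 1];       // the test2.js tag currently executing
            var holder = document.createElement('div');
            holder.id = 'test';
            current.parentNode.insertBefore(holder, current.nextSibling);

            function startWidget() {
                jQuery(holder).text('Widget content rendered with jQuery');
            }

            if (window.jQuery) {
                startWidget();
            } else {
                var tag = document.createElement('script');
                tag.src = 'https://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js';
                tag.onload = startWidget;                    // add onreadystatechange for old IE
                document.getElementsByTagName('head')[0].appendChild(tag);
            }
        })();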

    Read the article

  • ViewModel updates after Model server roundtrip

    - by Pavel Savara
    I have stateless services and anemic domain objects on the server side. The model between server and client is POCO DTOs. The client should become MVVM. The model could be a graph of about 100 instances of 20 different classes. The client editor contains diverse tab pages, all of them live-connected to the model/viewmodel. My problem is how to propagate changes after a server round-trip in a nice way. It's quite easy to propagate changes from the ViewModel to the DTO. For the way back it would be possible to throw away the old DTO and replace it wholesale with a new one, but that would cause a lot of redrawing for lists/DataTemplates. I could gather the server-side changes and transmit them to the client side, but the names of the changed fields would be domain/DTO specific, not ViewModel specific, and the mapping seems nontrivial to me. If I did it imperatively after the round-trip, it would break the separation of concerns/modularity of the ViewModels. I'm thinking about some kind of mapping rule engine, something like AutoMapper or EmitMapper, but that only solves very plain use cases. I don't see how it would map/propagate/convert adding items to a list, or removal. How do I identify instances in collections so it can merge values into existing instances? It should also propagate validation/error info. Maybe I should implement INotifyPropertyChanged on the DTO and try to replay the server-side events on it, and then bind the ViewModel to it? Would binding solve the collection-merge problems in a nice way? Is the EventAggregator from Prism useful for that? Is there any event record-replay component? Is there a better client-side pattern for an architecture with server-side logic?
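    On the "merge values into existing instances" point, a sketch of the usual merge-by-key approach (the OrderDto/OrderViewModel types and the UpdateFrom method are invented names standing in for the real DTO/ViewModel pair):

        using System.Collections.Generic;
        using System.Collections.ObjectModel;
        using System.Linq;

        public static class ViewModelMerger
        {
            // Merge a fresh DTO collection into an existing ObservableCollection keyed
            // by Id, so the view only redraws the items that actually changed instead
            // of the whole list.
            public static void MergeInto(ObservableCollection<OrderViewModel> target,
                                         IEnumerable<OrderDto> fresh)
            {
                var freshById = fresh.ToDictionary(d => d.Id);

                // Drop view-models whose DTO disappeared on the server.
                for (int i = target.Count - 1; i >= 0; i--)
                    if (!freshById.ContainsKey(target[i].Id))
                        target.RemoveAt(i);

                var existingById = target.ToDictionary(vm => vm.Id);
                foreach (var dto in fresh)
                {
                    OrderViewModel vm;
                    if (existingById.TryGetValue(dto.Id, out vm))
                        vm.UpdateFrom(dto);   // copies changed fields, raising PropertyChanged per property
                    else
                        target.Add(new OrderViewModel(dto));
                }
            }
        }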

    Read the article

  • Where can I find a quick reference for standard Basic?

    - by Steve314
    The reason? Pure nostalgia. Anyway, there was a standard for Basic that was published in the late 80s or early 90s. It was probably ISO/IEC 10279:1991, but I don't have access to that and cannot be sure. Whatever this standard was, some of the syntax made its way into Borlands Turbo Basic and Microsofts Visual Basic. I never learned any significant amount of VB, but Turbo Basic is one of those things I played with in my mis-spent youth. At one time, my main reference was an article published in one of the main programming periodicals - maybe Personal Computer World, maybe Byte. A scan of that article (if anyone can even identify it) would be great, but all I really want is a few pages quick reference of that standard syntax. Must be free (I'm not that nostalgic), but it must describe the standard syntax - the whole point is to sort out what is standard as opposed to VB or whatever. EDIT The more I think about this, the more convinced I am that this standard was available around 1987 or 1988. Maybe it was the earlier non-full version of the standard above, or maybe it was pre-acceptance of the standard.

    Read the article

  • How can I test that my hash function is good in terms of max-load?

    - by philcolbourn
    I have read through various papers on the 'balls and bins' problem, and it seems that if a hash function is working right (i.e. it is effectively a random distribution) then the following should/must be true if I hash n values into a hash table with n slots (or bins):

    - The probability that a bin is empty, for large n, is 1/e.
    - The expected number of empty bins is n/e.
    - The probability that a bin has k collisions is <= 1/k!.
    - The probability that a bin has at least k collisions is <= (e/k)**k.

    These look easy to check. But the max-load test (the maximum number of collisions with high probability) is usually stated vaguely. Most texts state that the maximum number of collisions in any bin is O( ln(n) / ln(ln(n)) ). Some say it is 3*ln(n) / ln(ln(n)). Other papers mix ln and log, usually without defining them, or state that log is log base e and then use ln elsewhere. Is ln the log to base e or base 2? Is this max-load formula right? And how big should n be to run a test? This lecture seems to cover it best, but I am no mathematician: http://pages.cs.wisc.edu/~shuchi/courses/787-F07/scribe-notes/lecture07.pdf BTW, "with high probability" seems to mean 1 - 1/n.
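    A quick way to put numbers on this is a direct simulation (illustrative Python; the built-in hash is only a stand-in for the hash under test, and ln here means the natural logarithm):

        # Hash n distinct keys into n bins and compare the fullest bin with the
        # ~3*ln(n)/ln(ln(n)) bound; with n around 10**6 the asymptotics are usually
        # visible, and the empty-bin fraction should sit near 1/e.
        import math
        import random
        from collections import Counter

        def max_load(n, hash_fn):
            keys = random.sample(range(n * 100), n)          # n distinct keys
            bins = Counter(hash_fn(k) % n for k in keys)
            return max(bins.values()), 1 - len(bins) / n     # fullest bin, empty fraction

        n = 1_000_000
        observed, empty_frac = max_load(n, hash)             # substitute the hash under test
        bound = 3 * math.log(n) / math.log(math.log(n))
        print(f"max load {observed}, bound {bound:.2f}, "
              f"empty fraction {empty_frac:.3f} (1/e is {1/math.e:.3f})")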

    Read the article

  • Wordpress "Vote It Up" plugin help

    - by Sparks Memphis
    I'm creating a Digg-like site with WordPress, TDO Mini Forms and Vote It Up where people who live in the city can log on, post their ideas for how to improve the city, and have others vote. On the home page there are two columns: on the right is the TDO submit form; on the left are the "Most Wanted" items, the posts with the highest rating. I've found an impressive lack of information on this plugin. I'm not great with PHP, but I can make do most of the time. I want to output the post title, post author, and links for yes and no votes, and have the titles in the left column listed in most-popular order. Preferably I'd like the home page to show only the top 5 or so highest rated posts. I can't find any way to output that information as I need. I was really hoping there was a simple way to call the highest rated post titles in the loop for the main page, but there doesn't seem to be a way. About the only thing I've found is the DisplayVotes tag, which isn't incredibly helpful. Does anybody know how I can accomplish this, or can provide some expert advice? Help would be immensely appreciated.

    Read the article

  • Sharepoint isn't accepting new Credentials initially when switching users.

    - by Tiziani
    Hi all, I have a standard website (one web application and one site collection) with some custom pages and web parts. The issue I'm having is that when I try to switch users using "Sign In As a Different User" and enter new credentials (even for another site collection admin account), IE tries the account three times and then presents a 401 Access Denied screen. After that, if I erase all the access-denied stuff from the browser's URL, I'm logged in as the new account I had just entered and which was not accepted. After researching for a while on Google, I found a KB article ( http://support.microsoft.com/kb/970814 ) that might be related, but I just tested it here and it didn't work at all. The modified method suggested by the KB is the following:

        function LoginAsAnother(url, bUseSource)
        {
            document.cookie = "loginAsDifferentAttemptCount=0";
            if (bUseSource == "1")
            {
                GoToPage(url);
            }
            else
            {
                //var ch = url.indexOf("?") >= 0 ? "&" : "?";
                //url += ch + "Source=" + escapeProperly(window.location.href);
                //STSNavigate(url);
                document.execCommand("ClearAuthenticationCache");
            }
        }

    But after making this change, it no longer asks for new credentials. Any ideas?

    Read the article

  • git pull fails after another member pushes something

    - by naiad
    Here's the story... we have a GitHub account. I clone the repository, then I can work with it, commit things, push things, etc. I use Linux with the command line and git version 1.7.7.3. Then another user, using Eclipse and the git plugin for Eclipse (eGit 1.1.0), pushes something, and it appears on the GitHub web pages as the last commit, but when I try to pull:

        $ git pull
        remote: Counting objects: 13, done.
        remote: Compressing objects: 100% (6/6), done.
        remote: Total 9 (delta 2), reused 7 (delta 0)
        Unpacking objects: 100% (9/9), done.
        error: unable to find 3e6c5386cab0c605877f296642d5183f582964b6
        fatal: object 3e6c5386cab0c605877f296642d5183f582964b6 not found

    "3e6c5386cab0c605877f296642d5183f582964b6" is the commit hash of the last commit, done by the other user. There's no problem at all browsing it through the web page, but for me it's impossible to pull it. It's strange, because my command-line output mentions that commit hash, so it knows it is the last commit in the GitHub system, but my git cannot pull it! Maybe the git protocol used in eGit is incompatible with the console git 1.7.7.3...

    Read the article

  • Flexible design - customizable entity model, UI and workflow

    - by Ngm
    Hi all, I want to achieve the following in the software I am building:

    1. Customizable entity model
    2. Customizable UI
    3. Customizable workflow

    I have thought about an approach to achieve this; I'd like you to review it and make suggestions:

    - Entity objects should be plain objects and will hold just data.
    - Separate the entity model and DB schema by using a framework (like NHibernate?). This will allow easy modification of entity objects.
    - Business logic to fetch/modify entities has to be granular enough that it can be invoked as part of the workflow.
    - Business objects should not hold any state, and hence will contain only static methods.
    - The workflow will decide, depending upon the "state" of an entity/entities, which methods on which business object(s) to invoke.
    - The workflow should obtain the results of the processing and then pass the business objects on to the appropriate UI screen.
    - The UI screen has to contain instructions about how to display a given entity/entities. Possibly the UI has to be generated dynamically based on a set of UI instructions (like XUL).

    What do you think about this approach? Please suggest which existing frameworks (like NHibernate, Windows Workflow) fit into this model, so that I don't spend time coding these frameworks myself. Also, is there any ASP.NET framework that can generate dynamic ASP.NET AJAX pages based on a set of UI instructions (like Mozilla XUL)? I have recently been exploring Apache OFBiz and was impressed by its ability to customize most areas of the application: UI, workflow, entities. Is there any similar application (not necessarily an ERP system) developed in C#/.NET which offers a similar level of customization? I am looking for examples of applications developed in C# that are highly customizable in terms of UI, workflow and entity model.

    Read the article

  • How to arrange business logic in a Kohana 3 project

    - by Pekka
    I'm looking for advice, tutorials and links at how to set up a mid-sized web application with Kohana 3. I have implemented MVC patterns in the past but never worked against a "formalized" MVC framework so I'm still getting my head around the terminology - toying around with basic examples, building views and templates, and so on. I'm progressing fairly well but I want to set up a real-world web project (one of my own that I've been planning for quite some time now) as a learning object. I learn best by example, but example-based documentation is a bit sparse for Kohana 3 right now - they say so themselves on the site. While I'm not worried about learning the framework as I go along, I want to make sure the code base is healthily structured from the start - i.e. controllers are split nicely, named well and according to standards, and most importantly the business logic is separated into appropriately sized models. My application could, in its core, be described as a business directory with a range of search and listing functions, and a login area for each entry owner. The actual administrative database backend is already taken care of. Supposing I have all the API worked out and in place already - list all businesses, edit business, list businesses by street name, create offer logged in as business, and so on, and I'm just looking for how to fit the functionality into a MVC pattern and into a Kohana application structure that can be easily extended. Do you know real-life examples of "database-heavy" applications like directories, online communities... with a log-in area built on Kohana 3, preferably Open Source so I could take a peek how they do it? Are there conventions or best practices on how to structure an extendable login area for end users in a Kohana project that is not only able to handle a business directory page, but further products on separate pages as well? Do you know any good resources on building complex applications with Kohana? Have you built something similar and could give me recommendations on a project structure?

    Read the article

  • ASP.NET MVC on Cassini: How can I force the "content" directory to return 304s instead of 200s?

    - by Portman
    Scenario: I have an ASP.NET MVC application developed in Visual Studio 2008. There is a root folder named "Content" that stores images and stylesheets. When I run locally (using Cassini) and browse my application, every resource from the "Content" directory is always downloaded. Using Firebug, I can verify that the web server returns an HTTP 200 ("ok"). Desired: I would like for Cassini to return HTTP 304 ("not modified") instead of 200. This is the behavior when running the site under IIS7. Reasoning: The site I am working on has a large number of static resources (often as many as 40 per page). Browsing the site is very fast on IIS7, because these resources are (correctly) cached by the browser. However, browsing the site on my local machine is painfully slow. Pages that render in under 1 second on IIS7 take over 30 seconds to render on Cassini. It's actually faster for me to upload the entire website every few minutes and test from there. (Yes, I recognize that this is perverse and crazy.) So: how can I instruct/trick Cassini into treating the "Content" directory like IIS7 does?
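    One development-only workaround that is sometimes suggested (a rough sketch, not verified against this setup): an HttpModule, registered under <httpModules> in web.config, that answers conditional GETs for /Content itself so Cassini returns 304s the way IIS7 does for unchanged static files. The module and path names here are assumptions.

        using System;
        using System.IO;
        using System.Web;

        public class StaticContent304Module : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.BeginRequest += (s, e) => HandleStatic((HttpApplication)s);
            }

            private static void HandleStatic(HttpApplication app)
            {
                string path = app.Request.AppRelativeCurrentExecutionFilePath;
                if (!path.StartsWith("~/Content/", StringComparison.OrdinalIgnoreCase))
                    return;

                string file = app.Server.MapPath(path);
                if (!File.Exists(file))
                    return;

                // HTTP dates have one-second resolution, so truncate before comparing.
                DateTime lastWrite = File.GetLastWriteTimeUtc(file);
                lastWrite = new DateTime(lastWrite.Year, lastWrite.Month, lastWrite.Day,
                                         lastWrite.Hour, lastWrite.Minute, lastWrite.Second,
                                         DateTimeKind.Utc);

                string ifModifiedSince = app.Request.Headers["If-Modified-Since"];
                DateTime since;
                if (ifModifiedSince != null &&
                    DateTime.TryParse(ifModifiedSince, out since) &&
                    lastWrite <= since.ToUniversalTime())
                {
                    app.Response.StatusCode = 304;
                    app.Response.SuppressContent = true;
                    app.CompleteRequest();
                    return;
                }

                // Emit Last-Modified so the browser starts sending conditional requests.
                app.Response.Cache.SetCacheability(HttpCacheability.Public);
                app.Response.Cache.SetLastModified(lastWrite.ToLocalTime());
            }

            public void Dispose() { }
        }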

    Read the article

  • How do I escape ampersands in batch files?

    - by Peter Mortensen
    How do I escape ampersands in a batch file (or from the Windows command line) in order to use the start command to open web pages with ampersands in the URL? Double quotes will not work with start; this starts a new command line window instead. Update 1: Wael Dalloul's solution works. In addition, if there are URL encoded characters (e.g. space is encoded as %20) in the URL and it is in a batch file then '%' must be encoded as '%%'. This is not the case in the example. Example, from the command line (CMD.EXE): start http://www.google.com/search?client=opera&rls=en&q=escape+ampersand&sourceid=opera&ie=utf-8&oe=utf-8 will result in http://www.google.com/search?client=opera being opened in the default browser and these errors in the command line window: 'rls' is not recognized as an internal or external command, operable program or batch file. 'q' is not recognized as an internal or external command, operable program or batch file. 'sourceid' is not recognized as an internal or external command, operable program or batch file. 'ie' is not recognized as an internal or external command, operable program or batch file. 'oe' is not recognized as an internal or external command, operable program or batch file. Platform: Windows XP 64 bit SP2.
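    For reference, the caret escape that the accepted approach relies on looks like this (same example URL as below, with each & escaped as ^&; inside a batch file, any % in the URL would additionally need to be doubled, as noted in the update):

        rem Works from the command line and from a .bat file (no % characters to double here).
        start http://www.google.com/search?client=opera^&rls=en^&q=escape+ampersand^&sourceid=opera^&ie=utf-8^&oe=utf-8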

    Read the article

  • Drupal with clean urls turned on is putting question marks in URL

    - by aussiegeek
    I have a Drupal site with clean URLs; the pages load correctly, but then the URL is rewritten, which I really don't want to happen. My .htaccess is:

        <IfModule mod_rewrite.c>
          RewriteEngine on

          # If your site can be accessed both with and without the 'www.' prefix, you
          # can use one of the following settings to redirect users to your preferred
          # URL, either WITH or WITHOUT the 'www.' prefix. Choose ONLY one option:
          #
          # To redirect all users to access the site WITH the 'www.' prefix,
          # (http://example.com/... will be redirected to http://www.example.com/...)
          # adapt and uncomment the following:
          # RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
          # RewriteRule ^(.*)$ http://www.example.com/$1 [L,R=301]
          #
          # To redirect all users to access the site WITHOUT the 'www.' prefix,
          # (http://www.example.com/... will be redirected to http://example.com/...)
          # uncomment and adapt the following:
          # RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
          # RewriteRule ^(.*)$ http://example.com/$1 [L,R=301]

          # Modify the RewriteBase if you are using Drupal in a subdirectory or in a
          # VirtualDocumentRoot and the rewrite rules are not working properly.
          # For example if your site is at http://example.com/drupal uncomment and
          # modify the following line:
          # RewriteBase /drupal
          #
          # If your site is running in a VirtualDocumentRoot at http://example.com/,
          # uncomment the following line:
          RewriteBase /

          # Rewrite URLs of the form 'x' to the form 'index.php?q=x'.
          RewriteCond %{REQUEST_URI} !(connect|administration)
          RewriteCond %{REQUEST_FILENAME} !-f
          RewriteCond %{REQUEST_FILENAME} !-d
          RewriteCond %{REQUEST_URI} !=/favicon.ico
          RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]
        </IfModule>

    Read the article

  • Facebook and retrieving a users "wall"

    - by Neurofluxation
    I have been given the dubious task of working with the Facebook API. I have managed to get quite far with all the little bits and pieces (bringing in "fan pages" and "friend lists"). However, I cannot seem to import a users wall into my App using SOLELY Javascript. I have this code to bring in the users friends: var widget_div = document.getElementById("profile_pics"); FB.ensureInit(function () { FB.Facebook.get_sessionState().waitUntilReady(function() { FB.Facebook.apiClient.friends_get(null, function(result) { var markup = ""; var num_friends = result ? Math.min(100, result.length) : 0; if (num_friends > 0) { for (var i=0; i<num_friends; i++) { markup += "<div align='left' class='commented' style='background-color: #fffbcd; border: 1px solid #9d9b80; padding: 0px 10px 0px 0px; margin-bottom: 5px; width: 75%; height: 50px; font-size: 16px;'><fb:profile-pic size='square' uid='"+result[i]+"' facebook-logo='true'></fb:profile-pic><div style='float: right; padding-top: 15px;'><fb:name uid='"+result[i]+"'></fb:name></div></div>"; } } widget_div.innerHTML = markup; FB.XFBML.Host.parseDomElement(widget_div); }); }); }); /*******YOUR FRIENDS******/ FB.XFBML.Host.parseDomTree(); Any idea whether I can change this to retrieve the Walls? Thanks in advance you great people! ^_^

    Read the article

  • Slowdowns when reading from an urlconnection's inputstream (even with byte[] and buffers)

    - by user342677
    Ok so after spending two days trying to figure out the problem, and reading about dizillion articles, i finally decided to man up and ask to for some advice(my first time here). Now to the issue at hand - I am writing a program which will parse api data from a game, namely battle logs. There will be A LOT of entries in the database(20+ million) and so the parsing speed for each battle log page matters quite a bit. The pages to be parsed look like this: http://api.erepublik.com/v1/feeds/battle_logs/10000/0. (see source code if using chrome, it doesnt display the page right). It has 1000 hit entries, followed by a little battle info(lastpage will have <1000 obviously). On average, a page contains 175000 characters, UTF-8 encoding, xml format(v 1.0). Program will run locally on a good PC, memory is virtually unlimited(so that creating byte[250000] is quite ok). The format never changes, which is quite convenient. Now, I started off as usual: //global vars,class declaration skipped public WebObject(String url_string, int connection_timeout, int read_timeout, boolean redirects_allowed, String user_agent) throws java.net.MalformedURLException, java.io.IOException { // Open a URL connection java.net.URL url = new java.net.URL(url_string); java.net.URLConnection uconn = url.openConnection(); if (!(uconn instanceof java.net.HttpURLConnection)) { throw new java.lang.IllegalArgumentException("URL protocol must be HTTP"); } conn = (java.net.HttpURLConnection) uconn; conn.setConnectTimeout(connection_timeout); conn.setReadTimeout(read_timeout); conn.setInstanceFollowRedirects(redirects_allowed); conn.setRequestProperty("User-agent", user_agent); } public void executeConnection() throws IOException { try { is = conn.getInputStream(); //global var l = conn.getContentLength(); //global var } catch (Exception e) { //handling code skipped } } //getContentStream and getLength methods which just return'is' and 'l' are skipped Here is where the fun part began. I ran some profiling (using System.currentTimeMillis()) to find out what takes long ,and what doesnt. The call to this method takes only 200ms on avg public InputStream getWebPageAsStream(int battle_id, int page) throws Exception { String url = "http://api.erepublik.com/v1/feeds/battle_logs/" + battle_id + "/" + page; WebObject wobj = new WebObject(url, 10000, 10000, true, "Mozilla/5.0 " + "(Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 ( .NET CLR 3.5.30729)"); wobj.executeConnection(); l = wobj.getContentLength(); // global variable return wobj.getContentStream(); //returns 'is' stream } 200ms is quite expected from a network operation, and i am fine with it. BUT when i parse the inputStream in any way(read it into string/use java XML parser/read it into another ByteArrayStream) the process takes over 1000ms! 
for example, this code takes 1000ms IF i pass the stream i got('is') above from getContentStream() directly to this method: public static Document convertToXML(InputStream is) throws ParserConfigurationException, IOException, SAXException { DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); DocumentBuilder db = dbf.newDocumentBuilder(); Document doc = db.parse(is); doc.getDocumentElement().normalize(); return doc; } this code too, takes around 920ms IF the initial InputStream 'is' is passed in(dont read into the code itself - it just extracts the data i need by directly counting the characters, which can be done thanks to the rigid api feed format): public static parsedBattlePage convertBattleToXMLWithoutDOM(InputStream is) throws IOException { // Point A BufferedReader br = new BufferedReader(new InputStreamReader(is)); LinkedList ll = new LinkedList(); String str = br.readLine(); while (str != null) { ll.add(str); str = br.readLine(); } if (((String) ll.get(1)).indexOf("error") != -1) { return new parsedBattlePage(null, null, true, -1); } //Point B Iterator it = ll.iterator(); it.next(); it.next(); it.next(); it.next(); String[][] hits_arr = new String[1000][4]; String t_str = (String) it.next(); String tmp = null; int j = 0; for (int i = 0; t_str.indexOf("time") != -1; i++) { hits_arr[i][0] = t_str.substring(12, t_str.length() - 11); tmp = (String) it.next(); hits_arr[i][1] = tmp.substring(14, tmp.length() - 9); tmp = (String) it.next(); hits_arr[i][2] = tmp.substring(15, tmp.length() - 10); tmp = (String) it.next(); hits_arr[i][3] = tmp.substring(18, tmp.length() - 13); it.next(); it.next(); t_str = (String) it.next(); j++; } String[] b_info_arr = new String[9]; int[] space_nums = {13, 10, 13, 11, 11, 12, 5, 10, 13}; for (int i = 0; i < space_nums.length; i++) { tmp = (String) it.next(); b_info_arr[i] = tmp.substring(space_nums[i] + 4, tmp.length() - space_nums[i] - 1); } //Point C return new parsedBattlePage(hits_arr, b_info_arr, false, j); } I have tried replacing the default BufferedReader with BufferedReader br = new BufferedReader(new InputStreamReader(is), 250000); This didnt change much. My second try was to replace the code between A and B with: Iterator it = IOUtils.lineIterator(is, "UTF-8"); Same result, except this time A-B was 0ms, and B-C was 1000ms, so then every call to it.next() must have been consuming some significant time.(IOUtils is from apache-commons-io library). And here is the culprit - the time taken to parse the stream to string, be it by an iterator or BufferedReader in ALL cases was about 1000ms, while the rest of the code took 0ms(e.g. irrelevant). This means that parsing the stream to LinkedList, or iterating over it, for some reason was eating up a lot of my system resources. question was - why? Is it just the way java is made...no...thats just stupid, so I did another experiment. In my main method I added after the getWebPageAsStream(): //Point A ba = new byte[l]; // 'l' comes from wobj.getContentLength above bytesRead = is.read(ba); //'is' is our URLConnection original InputStream offset = bytesRead; while (bytesRead != -1) { bytesRead = is.read(ba, offset - 1, l - offset); offset += bytesRead; } //Point B InputStream is2 = new ByteArrayInputStream(ba); //Now just working with 'is2' - the "copied" stream The InputStream-byte[] conversion took again 1000ms - this is the way many ppl suggested to read an InputStream, and stil it is slow. 
And guess what - the 2 parser methods above (convertToXML() and convertBattlePagetoXMLWithoutDOM(), when passed 'is2' instead of 'is' took, in all 4 cases, under 50ms to complete. I read a suggestion that the stream waits for connection to close before unblocking, so i tried using HttpComponentsClient 4.0 (http://hc.apache.org/httpcomponents-client/index.html) instead, but the initial InputStream took just as long to parse. e.g. this code: public InputStream getWebPageAsStream2(int battle_id, int page) throws Exception { String url = "http://api.erepublik.com/v1/feeds/battle_logs/" + battle_id + "/" + page; HttpClient httpclient = new DefaultHttpClient(); HttpGet httpget = new HttpGet(url); HttpParams p = new BasicHttpParams(); HttpConnectionParams.setSocketBufferSize(p, 250000); HttpConnectionParams.setStaleCheckingEnabled(p, false); HttpConnectionParams.setConnectionTimeout(p, 5000); httpget.setParams(p); HttpResponse response = httpclient.execute(httpget); HttpEntity entity = response.getEntity(); l = (int) entity.getContentLength(); return entity.getContent(); } took even longer to process(50ms more for just the network) and the stream parsing times remained the same. Obviously it can be instantiated so as to not create HttpClient and properties every time(faster network time), but the stream issue wont be affected by that. So we come to the center problem - why does the initial URLConnection InputStream(or HttpClient InputStream) take so long to process, while any stream of same size and content created locally is orders of magnitude faster? I mean, the initial response is already somewhere in RAM, and I cant see any good reasong why it is processed so slowly compared to when a same stream is just created from a byte[]. Considering I have to parse million of entries and thousands of pages like that, a total processing time of almost 1.5s/page seems WAY WAY too long. Any ideas? P.S. Please ask in any more code is required - the only thing I do after parsing is make a PreparedStatement and put the entries into JavaDB in packs of 1000+, and the perfomance is ok ~ 200ms/1000entries, prb could be optimized with more cache but I didnt look into it much.

    Read the article

  • Deploying .NET COM dll, getting error (0x80070002)

    - by Brett
    I have a .NET COM assembly I am attempting to deploy to a web server (IIS 6, Windows 2003). We have successfully deployed this assembly to our test environment, but the production environment is not working. The assembly is being called from a classic ASP page. Every time that page tries to initialize the assembly with Set LTMRender = CreateObject("LTMRender.Render"), I get an error "Error Type:, (0x80070002)". This error seems to indicate a permission-denied or file-not-found type of problem. I created a test app to see if the assembly works outside of the web page. The .exe initializes the assembly, and then makes a call designed to fail, which in turn causes the assembly to produce a log file. It works if I run the .exe in the same folder as the assembly, but fails if I run it elsewhere. For some reason, the assembly is not accessible from outside its folder. I can't figure out why this won't work. Things I have confirmed:

    - The deployment folder has adequate permissions. We have confirmed that the folder the assembly is installed in has the correct permissions for all the necessary user accounts.
    - The assembly is signed with a strong name, and was registered with regasm.exe C:\_WebSites\LTMRender\LTMRender.dll /codebase /tlb:C:\_WebSites\LTMRender\LTMRender.tlb. Regasm reported success.
    - The assembly has the attribute and relevant GUIDs set correctly.

    Any tips?

    EDIT: We ran Filemon against my testapp.exe and it seems to have indicated what the problem is. When testapp.exe runs in the D:\_websites\DocWebV2\ or D:\_websites\DocWebV2\LTMRender\ folder, it succeeds, and Filemon shows D:\_websites\DocWebV2\LTMRender\pinPDF.dll SUCCESS. If I run my testapp.exe in D:\_websites\DocWebV2\Client (where my ASP pages run), it shows D:\_websites\DocWebV2\pinPDF.dll NAME NOT FOUND and then D:\_websites\DocWebV2\pinPDF\pinPDF.dll FILE NOT FOUND. I'm not sure why it is not looking in the correct folder if it's run from outside that particular folder.

    Read the article

  • jQuery - Loading content into div, styles not applied?

    - by Kenny Bones
    Hi! I'm trying to get this content loader to work, and I've managed to get it to fetch new content, but once the content is loaded it isn't styled correctly. Also, the character "é" becomes a question mark. A doctype problem? On top of that, the h2 tag, which normally has Cufon applied to it, is not being processed. So basically, this content loader requires me to have a bunch of pages that are essentially the same, except for the content I want to retrieve. This way, users can use the actual URL as you'd normally expect. Only when a link is clicked on an already loaded page is it just the content of the #content div that's really being replaced. I can post code here, but I think it's better to just watch it happen on the test page. It's very low on graphics, btw ;) http://www.matkalenderen.no Just click the blue text link and you'll see it. Also, the red button on the second loaded content is supposed to revert the content back to the previous state, but it's not being triggered or something. What's happening?

    Read the article
