Search Results

Search found 27144 results on 1086 pages for 'tail call optimization'.


  • Remove duplicates from DataTable and custom IEqualityComparer<DataRow>

    - by abatishchev
    How do I implement IEqualityComparer&lt;DataRow&gt; to remove duplicate rows from a DataTable with the following structure: ID (primary key), col_1, col_2, col_3, col_4? The default comparer doesn't work because each row has its own unique primary key. How do I implement an IEqualityComparer&lt;DataRow&gt; that skips the primary key and compares only the remaining data? I have something like this:

        public class DataRowComparer : IEqualityComparer<DataRow>
        {
            public bool Equals(DataRow x, DataRow y)
            {
                return x.ItemArray.Except(new object[] { x[x.Table.PrimaryKey[0].ColumnName] })
                    == y.ItemArray.Except(new object[] { y[y.Table.PrimaryKey[0].ColumnName] });
            }

            public int GetHashCode(DataRow obj)
            {
                return obj.ToString().GetHashCode();
            }
        }

    and

        public static DataTable RemoveDuplicates(this DataTable table)
        {
            return (table.Rows.Count > 0)
                ? table.AsEnumerable().Distinct(new DataRowComparer()).CopyToDataTable()
                : table;
        }

    but it only calls GetHashCode() and never calls Equals().
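
    One way to fix this – a minimal sketch, not the only approach: == on the two IEnumerable<object> results returned by Except() only compares references, so Equals() as written can never return true, and GetHashCode() must be derived from the same values that Equals() compares. Comparing the non-key values with SequenceEqual and hashing those same values would look roughly like this:

        using System.Collections.Generic;
        using System.Data;
        using System.Linq;

        public class DataRowValueComparer : IEqualityComparer<DataRow>
        {
            // Returns the row's values excluding its primary-key column(s).
            private static object[] NonKeyValues(DataRow row)
            {
                DataColumn[] keys = row.Table.PrimaryKey;
                return row.Table.Columns.Cast<DataColumn>()
                    .Where(c => !keys.Contains(c))
                    .Select(c => row[c])
                    .ToArray();
            }

            public bool Equals(DataRow x, DataRow y)
            {
                // SequenceEqual compares the values element by element.
                return NonKeyValues(x).SequenceEqual(NonKeyValues(y));
            }

            public int GetHashCode(DataRow row)
            {
                // Hash the same values Equals compares, so equal rows hash identically.
                return NonKeyValues(row).Aggregate(17,
                    (hash, value) => hash * 31 + (value == null ? 0 : value.GetHashCode()));
            }
        }

    This also avoids a subtle bug in the original: Except removes every occurrence of the key's value, so any data column that happened to contain the same value as the key would be dropped from the comparison too.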


  • How to rollback in EF4?

    - by Jaxidian
    In my research about rolling back transactions in EF4, everybody seems to refer to this blog post or offer a similar explanation. In my scenario, I want to do this in a unit-testing context, where I roll back practically everything I do within my unit test to keep from updating the data in the database (yeah, we'll increment counters, but that's okay). To do this, is it best to follow the plan below? Am I missing some concept or anything else major (aside from the fact that my SetupMyTest and PerformMyTest functions won't really exist that way)?

        using (var ts = new TransactionScope())
        {
            // Arrange
            SetupMyTest(context);

            // Act
            PerformMyTest(context);
            var numberOfChanges = context.SaveChanges(false);
            // if there's an issue, chances are that an exception has been thrown by now.

            // Assert
            Assert.IsTrue(numberOfChanges > 0, "Failed to _____");

            // transaction will roll back because we never call Complete on it
        }


  • ASP.NET MVC 3: Layouts and Sections with Razor

    - by ScottGu
    This is another in a series of posts I’m doing that cover some of the new ASP.NET MVC 3 features:

    - Introducing Razor (July 2nd)
    - New @model keyword in Razor (Oct 19th)
    - Layouts with Razor (Oct 22nd)
    - Server-Side Comments with Razor (Nov 12th)
    - Razor’s @: and <text> syntax (Dec 15th)
    - Implicit and Explicit code nuggets with Razor (Dec 16th)
    - Layouts and Sections with Razor (Today)

    In today’s post I’m going to go into more detail about how Layout pages work with Razor. In particular, I’m going to cover how you can have multiple, non-contiguous, replaceable “sections” within a layout file – and enable views based on layouts to optionally “fill in” these different sections at runtime. The Razor syntax for doing this is clean and concise. I’ll also show how you can dynamically check at runtime whether a particular layout section has been defined, and how you can provide alternate content (or even an alternate layout) in the event that a section isn’t specified within a view template. This provides a powerful and easy way to customize the UI of your site and make it clean and DRY from an implementation perspective.

    What are Layouts?

    You typically want to maintain a consistent look and feel across all of the pages within your web-site/application. ASP.NET 2.0 introduced the concept of “master pages”, which helps enable this when using .aspx based pages or templates. Razor also supports this concept with a feature called “layouts” – which allows you to define a common site template, and then inherit its look and feel across all the views/pages on your site. I previously discussed the basics of how layout files work with Razor in my ASP.NET MVC 3: Layouts with Razor blog post. Today’s post will go deeper and discuss how you can define multiple, non-contiguous, replaceable regions within a layout file that you can then optionally “fill in” at runtime.

    Site Layout Scenario

    Let’s look at how we can implement a common site layout scenario with ASP.NET MVC 3 and Razor. Specifically, we’ll implement some site UI where we have a common header and footer on all of our pages, plus a “sidebar” section to the right of our common site layout. On some pages we’ll customize the SideBar to contain content specific to the page it is included on, and on other pages (that do not have custom sidebar content) we’ll fall back and provide some “default content” for the sidebar. We’ll use ASP.NET MVC 3 and Razor to enable this customization in a nice, clean way. Below are step-by-step tutorial instructions on how to build such a site with ASP.NET MVC 3 and Razor.

    Part 1: Create a New Project with a Layout for the “Body” Section

    We’ll begin by using the “File->New Project” menu command within Visual Studio to create a new ASP.NET MVC 3 project, using the “Empty” template option. This creates a new project that has no default controllers in it.

    Creating a HomeController

    We will then right-click on the “Controllers” folder of our newly created project and choose the “Add->Controller” context menu command, which brings up the “Add Controller” dialog. We’ll name the new controller we create “HomeController”. When we click the “Add” button, Visual Studio adds a HomeController class to our project with a default “Index” action method that returns a view. We won’t need to write any Controller logic to implement this sample – so we’ll leave the default code as-is.
    Creating a View Template

    Our next step will be to implement the view template associated with the HomeController’s Index action method. To implement the view template, we will right-click within the “HomeController.Index()” method and select the “Add View” command to create a view template for our home page. This brings up the “Add View” dialog within Visual Studio. We do not need to change any of the default settings within the dialog (the name of the template was auto-populated to Index because we invoked the “Add View” context menu command within the Index method). When we click the “Add” button within the dialog, a Razor-based “Index.cshtml” view template is added to the \Views\Home\ folder within our project, and we can add some simple default static content to it. Notice that we don’t have an <html> or <body> section defined within our view template. This is because we are going to rely on a layout template to supply these elements and use it to define the common site layout and structure for our site (ensuring that it is consistent across all pages and URLs within the site).

    Customizing our Layout File

    Let’s open and customize the default “_Layout.cshtml” file that was automatically added to the \Views\Shared folder when we created our new project. The default layout file is pretty basic and simply outputs a title (if specified in either the Controller or the View template) and adds links to a stylesheet and jQuery. The call to “RenderBody()” indicates where the main body content of our Index.cshtml file will be merged into the output sent back to the browser. We’ll modify the Layout template to add a common header, footer and sidebar to the site, and then edit the “Site.css” file within the \Content folder of our project to add 4 CSS rules. Now when we run the project and browse to the home “/” URL of our project, the content of the HomeController’s Index view template and the site’s shared Layout template are merged together into a single HTML response.

    Part 2: Adding a “SideBar” Section

    Our site so far has a layout template that has only one “section” in it – what we call the main “body” section of the response. Razor also supports the ability to add additional “named sections” to layout templates. These sections can be defined anywhere in the layout file (including within the <head> section of the HTML), and allow you to output dynamic content to multiple, non-contiguous, regions of the final response.

    Defining the “SideBar” Section in our Layout

    Let’s update our Layout template to define an additional “SideBar” section of content that will be rendered within the <div id="sidebar"> region of our HTML. We can do this by calling the RenderSection(string sectionName, bool required) helper method within our _Layout.cshtml file, as in the sketch below. The first parameter to the “RenderSection()” helper method specifies the name of the section we want to render at that location in the layout template. The second parameter is optional, and allows us to define whether the section we are rendering is required or not. If a section is “required”, then Razor will throw an error at runtime if that section is not implemented within a view template that is based on the layout file (which can make it easier to track down content errors).
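
    A minimal sketch of what such a _Layout.cshtml might look like (the ids and markup here are illustrative, not the post’s exact code, which was shown as a screenshot):

        <!DOCTYPE html>
        <html>
        <head>
            <title>@ViewBag.Title</title>
            <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" />
        </head>
        <body>
            <div id="header">My Site</div>
            <div id="content">
                @RenderBody()
            </div>
            <div id="sidebar">
                @RenderSection("SideBar", required: false)
            </div>
            <div id="footer">Copyright My Site</div>
        </body>
        </html>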
    If a section is not required, then its presence within a view template is optional, and the above RenderSection() code will render nothing at runtime if it isn’t defined. Now that we’ve made the above change to our layout file, let’s hit refresh in our browser and see what our Home page now looks like. We currently have no content within our SideBar <div> – that is because the Index.cshtml view template doesn’t implement our new “SideBar” section yet.

    Implementing the “SideBar” Section in our View Template

    Let’s change our home page so that it has a SideBar section that outputs some custom content. We can do that by opening up the Index.cshtml view template and adding a new “SideBar” section to it, using Razor’s @section SectionName { } syntax. We could put our SideBar @section declaration anywhere within the view template. I think it looks cleaner when defined at the top or bottom of the file – but that is simply personal preference. You can include any content or code you want within @section declarations. For example, I have a C# code nugget that outputs the current time at the bottom of the SideBar section. I could have also written code that used ASP.NET MVC’s HTML/AJAX helper methods and/or accessed any strongly-typed model objects passed to the Index.cshtml view template. Now that we’ve made the above template changes, when we hit refresh in our browser again we’ll see that our SideBar content – which is specific to the home page of our site – is included in the page response sent back from the server, merged into the proper location of the HTML response.

    Part 3: Conditionally Detecting if a Layout Section Has Been Implemented

    Razor provides the ability for you to conditionally check (from within a layout file) whether a section has been defined within the view template being rendered, and enables you to output an alternative response in the event that the section has not been defined. This provides a convenient way to specify default UI for optional layout sections. Let’s modify our Layout file to take advantage of this capability: below we conditionally check whether the “SideBar” section has been defined within the view template being rendered (using the IsSectionDefined() method), and if so we render the section. If the section has not been defined, then we instead render some default content for the SideBar – here simply an inline static string (<p>Default SideBar Content</p>). A real-world site would more likely refactor this default content into a separate partial template (which we’d render using the Html.RenderPartial() helper method within the else block) or alternatively use the Html.Action() helper method within the else block to encapsulate both the logic and rendering of the default sidebar.

    Note: You want to make sure you prefix calls to the RenderSection() helper method with a @ character – which tells Razor to execute the HelperResult it returns and merge the section content into the appropriate place of the output. Notice how the sketch below uses @RenderSection("SideBar") rather than just RenderSection("SideBar") – otherwise you’ll get an error.
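
    A minimal sketch of the conditional check described above (illustrative markup – the original post showed its own version as a screenshot):

        @if (IsSectionDefined("SideBar")) {
            <div id="sidebar">
                @RenderSection("SideBar")
            </div>
        }
        else {
            <div id="sidebar">
                <p>Default SideBar Content</p>
            </div>
        }

    The same pattern supports the “One Last Tweak” below: dropping the else block entirely hides the sidebar (and its surrounding HTML chrome) on pages that don’t define the section.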
    When we hit refresh on our home page, we will still see the same custom SideBar content we had before. This is because we implemented the SideBar section within our Index.cshtml view template (and so our Layout rendered it). Let’s now implement a “/Home/About” URL for our site by adding a new “About” action method to our HomeController. The About() action method simply renders a view back to the client when invoked. We can implement the corresponding view template for this action by right-clicking within the “About()” method and using the “Add View” menu command (like before) to create a new About.cshtml view template. We’ll implement the About.cshtml view template without defining a “SideBar” section within it. When we browse the /Home/About URL we’ll see the content we supplied in the main body section of our response, and the default SideBar content will be rendered: the layout file determined at runtime that a custom SideBar section wasn’t present in the About.cshtml view template, and instead rendered the default sidebar content.

    One Last Tweak…

    Let’s suppose that at a later point we decide that instead of rendering default sidebar content, we just want to hide the sidebar entirely from pages that don’t have any custom sidebar content defined. We could implement this change simply by making a small modification to our layout so that the sidebar content (and its surrounding HTML chrome) is only rendered if the SideBar section is defined. Razor is flexible enough that we can make changes like this without having to modify any of our view templates (nor make any Controller logic changes). We can instead make just this one modification to our Layout file and the rest happens cleanly. This type of flexibility makes Razor incredibly powerful and productive.

    Summary

    Razor’s layout capability enables you to define a common site template, and then inherit its look and feel across all the views/pages on your site. Razor enables you to define multiple, non-contiguous “sections” within layout templates that can be “filled in” by view templates. The @section {} syntax for doing this is clean and concise. Razor also supports the ability to dynamically check at runtime whether a particular section has been defined, and to provide alternate content (or even an alternate layout) in the event that it isn’t specified. This provides a powerful and easy way to customize the UI of your site – and make it clean and DRY from an implementation perspective.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • Running JUnit test classes from another JUnit test class

    - by Dr. Monkey
    I have two classes that I am testing (let's call them ClassA and ClassB). Each has its own JUnit test class (testClassA and testClassB respectively). ClassA relies on ClassB for its normal functioning, so I want to make sure ClassB passes its tests before running testClassA (otherwise the results from testClassA would be meaningless). What is the best way to do this? In this case it is for an assignment, so I need to keep it to the two specified test classes if possible. Can/should I throw an exception from testClassA if testClassB's tests haven't all passed? This would require testClassB to run invisibly and just report its success/failure to testClassA, rather than to the GUI (via JUnit). I am using Eclipse and JUnit 4.8.1.


  • ASP.NET MVC Pass multiple params from getJSON to controller

    - by andyJ
    Hi, I am making a call to a controller action in JavaScript using the getJSON method. I need to pass two parameters to my action method on the controller, but I am struggling to do so. I do not fully understand the routing tables and am not sure if this is what I need to use to get this working. Please see the example below of what I am trying to do:

        var action = "<%=Url.Content('~/Postcode/GetAddressResults/')%>"
            + $get("Premise").value + "/" + $get("SearchPostcode").value;

        $.getJSON(action, null, function(data) {
            $("#AddressDropDown").fillSelect(data);
        });

    This is my route, which I don't understand how to make use of:

        routes.MapRoute(
            "postcode",
            "Postcode/GetAddressResults/{premise}/{postcode}",
            new { controller = "Motor", action = "GetAddressResults",
                  premise = "", postcode = "" });
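
    One thing worth knowing here (a hedged suggestion, since the full RegisterRoutes isn't shown in the question): ASP.NET MVC matches routes in registration order, so a specific route like this one only fires if it is registered before the catch-all default route. A sketch:

        public static void RegisterRoutes(RouteCollection routes)
        {
            // Specific routes first, or "{controller}/{action}/{id}"
            // will swallow the request before this route is considered.
            routes.MapRoute(
                "postcode",
                "Postcode/GetAddressResults/{premise}/{postcode}",
                new { controller = "Motor", action = "GetAddressResults",
                      premise = "", postcode = "" });

            routes.MapRoute(
                "Default",
                "{controller}/{action}/{id}",
                new { controller = "Home", action = "Index", id = "" });
        }

    With that in place, an action signature like GetAddressResults(string premise, string postcode) on the controller the route targets should receive both values from the URL segments via model binding.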


  • Why isn't my os.rename working?

    - by Alex
    Hi All, I'm trying to rename some files, but getting a baffling error*. When I run this:

        if os.path.isfile(fullPath):
            print 'fmf exists'
            print fullPath
            print newFilePath
            os.rename(fullPath, newFilePath)

    I get the following output and error:

        fmf exists
        (correct fullPath)
        (correct newFilePath, ie. destination)
        Traceback (most recent call last):
          File "whatever.py", line 374, in ?
            os.rename(fullPath,newFilePath)
        OSError: [Errno 2] No such file or directory

    Since I know that the file at fullPath exists, I'm baffled by the error. Of course, newFilePath doesn't exist, because that would be dumb. Any hints? Thanks! Alex

    *Aren't they all?


  • Issue with Date validation from actionscript Flex 4

    - by Tam
    I have a DateValidator as follows:

        <mx:DateValidator id="stringDateValidator"
            property="text"
            required="true"
            inputFormat="dd-mm-yyyy"
            allowedFormatChars="*#~/-" />

    I would like to call the validator manually from ActionScript:

        var valErrEvent:ValidationResultEvent = stringDateValidator.validate(wholeDate);
        if (valErrEvent.results.length > 0) {
            ......

    But I'm getting the following exception:

        ReferenceError: Error #1069: Property month not found on spark.components.TextInput and there is no default value.
            at mx.validators::DateValidator$/validateDate()[E:\dev\gumbo_beta2\frameworks\projects\framework\src\mx\validators\DateValidator.as:203]
            at mx.validators::DateValidator/doValidation()[E:\dev\gumbo_beta2\frameworks\projects\framework\src\mx\validators\DateValidator.as:1404]
            at mx.validators::Validator/processValidation()[E:\dev\gumbo_beta2\frameworks\projects\framework\src\mx\validators\Validator.as:1012]
            at mx.validators::Validator/validate()[E:\dev\gumbo_beta2\frameworks\projects\framework\src\mx\validators\Validator.as:945]

    If I let the validator trigger automatically, it works. Do you know how I can make this work? Or do you have better ideas for validating dates from ActionScript instead of using the MX validator?


  • Calling notifyDataSetChanged doesn't fire onContentChanged event of SimpleCursorAdapter

    - by Pentium10
    I have this scenario in the onResume of an activity:

        @Override
        protected void onResume() {
            if (adapter1 != null) adapter1.notifyDataSetChanged();
            if (adapter2 != null) adapter2.notifyDataSetChanged();
            if (adapter3 != null) adapter3.notifyDataSetChanged();
            super.onResume();
        }

    The adapter has been defined as:

        public class ListCursorAdapter extends SimpleCursorAdapter {
            Cursor c;

            /* (non-Javadoc)
             * @see android.widget.CursorAdapter#onContentChanged()
             */
            @Override
            protected void onContentChanged() {
                // this is not called
                if (c != null) c.requery();
                super.onContentChanged();
            }
        }

    The onContentChanged event is never fired, even though onResume runs and the calls to the adapters are issued. What is wrong?


  • JQuery PrettyPhoto/Lightbox with Loupe Magnifier

    - by thebluefox
    Morning gang, Right, I'm trying to crowbar a magnifier, like this one, into prettyPhoto (picked because the JS is nice). The trouble I'm having is initiating the loupe function when the prettyPhoto has loaded. If I include it in the prettyPhoto JS, it either gets itself into an endless loop or doesn't get called at all. I've nearly got it working by putting a link next to the close button that calls the function inline, like so:

        <a href="#" onclick="$(\'.TB_Image\').loupe(); return false;">Magnify</a>

    The only problem here is that the return false doesn't work – though it does work when the call to loupe isn't in the onclick event. Doing it this way does run the loupe function, but the homepage gets loaded, so I don't know if it actually works or not. Has anyone got any suggestions at all? All help much appreciated!


  • AS3 OOP MVC with PHP

    - by Pepe Sanchez
    I'm new to ActionScript and Flex 3... I'm trying to develop a 100% OOP MVC application with Flex 3 using MXML, AS3 and PHP: M (PHP), V (MXML), C (AS3) are the 3 layers I chose for my development. I have 10 AS3 classes that are object-related between them, and some inherit from or implement interfaces. The only problem here is how to interact 100% OOP with my model. In this case my model has to be a PHP class that needs to be called from AS3 (the controller). For example, the AS3 class Patient has a method called Save:

        public function Save(data:Array) : void {
            /* PHP call - model layer */
        }

    I want to create an instance of my PHP Patient model class that connects to the DB and inserts the data array into it. What should I use? How can I also return a variable to AS3? What happens in AS3 if there is a caught exception in PHP? Thanks in advance :)


  • Need help with Swift mailer with Kohana wrapper

    - by alex
    My current code is this:

        $swift = email::connect();
        $swift->setSubject('hello')
            ->setFrom(array('[email protected]' => 'Alex'))
            ->setTo(array('[email protected]' => 'Alex'))
            ->setBody('hello')
            ->attach(Swift_Attachment::fromPath(DOCROOT . 'assets/attachments/instructions.pdf'));
        $swift->send();

    email::connect() returns an instance of SwiftMailer. As per these docs, it would seem that this should work. However, I get an error:

        Fatal error: Call to undefined method Swift_Mailer::setSubject() in /home/user/public_html/application/classes/controller/properties.php on line 45

    I've seen that email::connect() does exactly what the example code in the docs does – that is, include the correct file and return an instance of the library. What am I doing wrong? Thanks


  • How to send AT command in android?

    - by jitendra Kumar
    Hi All, I want to send an AT command from my application to the modem. Can someone please let me know how to send an AT command through my application? Do we need a Phone object to send an AT command? The ATResponseParser class parses part of the AT command syntax used to communicate with the mobile radio hardware in a mobile handset. This is, in fact, a command syntax very much like the AT command syntax used by modems, a standard described in the 3GPP document number TS 27.007 and related specifications. I want to send the AT command below to the modem:

        6.5 Hangup call +CHUP
        Table 13a: +CHUP action command syntax
        Command    Possible response(s)
        +CHUP
        +CHUP=?

    Please help me. Regards, Jitendra Kumar


  • WPF DataGrid : CanContentScroll property causing odd behavior

    - by Sonic Soul
    I have a solution where I generate a DataGrid (or multiple instances) based on user criteria. Each grid keeps receiving data as it comes in via an ObservableCollection. The problem I had was that the scrolling acted weird: it was choppy, and the scrollbar would resize itself while scrolling. Then I found the CanContentScroll property! It completely fixes the weird scrolling behavior, bringing me temporary bliss and happiness. However, it causes 2 unfortunate side effects:

    1. Whenever I re-create grid instances and bind them to my observable collection, it freezes my entire window for 5 seconds. When my grid grows to a big size, this delay can last for 30 seconds.

    2. When I call TradeGrid.ScrollIntoView(TradeGrid.Items(TradeGrid.Items.Count - 1)) to scroll to the bottom, it jumps to the bottom and then back to the top.

    Is there another way to achieve smooth scrolling, perhaps?


  • Load and Web Performance Testing using Visual Studio Ultimate 2010 – Part 3

    - by Tarun Arora
    Welcome back once again. In Part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing an application is important, the test tools available in Visual Studio Ultimate 2010, and various test rig topologies. In Part 2 I discussed the details of web performance and load tests, as well as why it’s important to follow a goal-based pattern while performance testing your application. In Part 3 I’ll be discussing test result analysis, test result drill-through, test report generation, test run comparison, the ASP.NET profiler, and some closing thoughts.

    Test Results – I see some creepy worms!

    In Part 2 we put together a web performance test and a load test; let’s run the load test to see how the web site responds to the load simulation. While the load test is running you will be able to see close to real-time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways:

    - Monitor a running load test – A condensed set of the performance counter data is maintained in memory. To prevent the results memory requirements from growing unbounded, up to 200 samples for each performance counter are maintained. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples.

    - After the load test run is completed – The test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. The summary view provides key results in a format that is compact and easy to read. You can also print the load test summary; this is generated after the test has completed or been stopped.

    - Analyse the load test results of a previously run load test – we’ll see this in the section where I discuss comparison between two test runs.

    The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, and to drill down to the user activity chart, where you can hover over data points to see more details of the test run.

    Generate Report => Test Run Comparisons

    The level of reporting you can generate using the Load Test Analyser is astonishing. You have the option to create Excel reports and conduct side-by-side analysis of two test results, or to track trend analysis. The tool also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET profiler report to conduct further analysis as well. “View Data and Diagnostic Attachments” opens the “Choose Diagnostic Data Adapter Attachment” dialog box to select an adapter to analyse the result type. For example, you can select an IntelliTrace adapter, click OK and open the IntelliTrace summary for the test agent that was used in the load test.

    Compare results

    This creates a set of reports that compares the data from two load test results using tables and bar charts. (The screen shots for this section came from the MSDN documentation; I would highly recommend exploring the wealth of knowledge available on MSDN.)
    Leaving Thoughts

    While load testing the application with an excessive load for a longer duration of time, I managed to bring IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that IIS had run out of threads, as all the threads were busy processing existing requests. One easy way of fixing this is to increase the default number of allocated threads, but that might escalate the problem; the better suggestion is to try and drill down to the actual root cause. Whenever garbage collection runs it stops processing any pages, so all requests that come in during that period are queued up – though realistically, garbage collection completes in a fraction of a second.

    To understand this better, let’s look at the .NET heap. It is divided into the large object heap and the small object heap: anything greater than 85 KB in size will be allocated on the large object heap. The large object heap is non-compacting, and remember, large objects are expensive to move around – so if you are allocating something on the large object heap, make sure that you really need it! The small object heap, on the other hand, is divided into generations: objects that are supposed to be short-lived live in Gen-0, and long-living objects eventually move to Gen-2 as garbage collection goes through.

    All objects smaller than 85 KB are first allocated in Gen-0. When a new object comes in and finds Gen-0 full, the garbage collection process starts: it checks for all the dead objects and marks them as candidates for deletion to free up memory, and promotes all the remaining objects in Gen-0 to Gen-1. So in the future, whenever you clean up Gen-1 you have to clean up Gen-0 as well. When you fill up Gen-0 again, all of Gen-1’s dead objects are reclaimed, the survivors are moved to Gen-2, and Gen-0’s survivors are moved to Gen-1 to free up Gen-0 – but by this time your garbage collection process has started to take much more time than it usually takes. Now, as I mentioned earlier, while garbage collection is running all page requests that come in are queued up. Does this explain why page requests might be getting queued up? Apart from this, it could also be the case that you are waiting for a long-running database process to complete.

    Let’s explore the heap a bit more… What is really a case of crisis is when objects live long enough to make it to Gen-2 and then die – this is definitely a high-cost operation. But sometimes you need objects in memory: for example, when you cache data you hold on to the objects because you need to use them right across the user session, which is acceptable. But if you want to see what extreme caching can do to your server, write a simple application that chucks a lot of data into the cache and run a load test over it for about 10-15 minutes, forcing a lot of data into memory and causing the heap to run out of memory. If you get to such a state, IIS, as a mode of recovery, restarts the worker process. It is a great way to free up all your memory in the heap, but it also clears the cache. The problem with this is that if a customer had 10 items in their shopping basket and that data was stored in the application cache, the user’s basket will now be empty, forcing them either to get frustrated and go to a competitor website or, if the customer is really patient, to give it another try!
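
    As an aside, the generational promotion described above can be observed directly from code – a small illustrative sketch, not from the original post:

        using System;

        class GenerationDemo
        {
            static void Main()
            {
                var data = new byte[1000];                 // small object: allocated in Gen-0
                Console.WriteLine(GC.GetGeneration(data)); // 0

                GC.Collect();                              // survivors of a collection are promoted
                Console.WriteLine(GC.GetGeneration(data)); // 1

                GC.Collect();
                Console.WriteLine(GC.GetGeneration(data)); // 2

                var big = new byte[100 * 1024];            // over the large-object threshold:
                Console.WriteLine(GC.GetGeneration(big));  // reported as generation 2 (LOH)
            }
        }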
    How can you address this? Well, there are two ways:

    1. Workaround – An x86 processor only allows a maximum of 4 GB of address space, which means the machine effectively has around 3.4 GB of RAM available. The OS needs about 1.5 GB of RAM to run efficiently, and IIS and the .NET framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because team builds by default compile your application in “Any CPU” mode, the application will run in x86 mode on an x86 processor and in x64 mode on an x64 processor. The problem with this is that not all applications are really x64 compatible, especially if you are using COM objects or external libraries. So, as a quick win, if you compile your application in x86 mode (by changing the “Any CPU” selection to x86 in the team build), you will be able to run it on an x64 machine in x86 mode (WOW64 – Windows-on-Windows), which means you can use 8 GB+ worth of RAM: if you take away everything else, your application will roughly get a heap of at least 4 GB to play with, which is immense. If you need a heap size of more than 4 GB, you have either built software for NASA or there is something fundamentally wrong in your application.

    2. Solution – Now that you have put a workaround in place, IIS will not restart the worker process as regularly, which means you can take a breather and start working to get to the root cause of this memory leak. But this begs the question: “How do I identify possible memory leaks in my application?” Well, I won’t say that there is one single tool that can tell you where the memory leak is, but trust me, ‘Performance Profiling’ is a great starting point; it definitely gets you going in the right direction. Let’s have a look at how.

    Performance Wizard – Start the Performance Wizard and select Instrumentation; this lets you measure function call counts and timings. Before running the performance session, right-click the performance session settings and choose Properties from the context menu to bring up the performance session properties page, and check the check boxes in the group ‘.NET memory profiling collection’, namely ‘Collect .NET object allocation information’ and ‘Also collect the .NET object lifetime information’.

    Now if you fire off the profiling session on your pages, you will notice that the results allow you to view ‘Object Lifetime’, which shows you the number of objects that made it to Gen-0, Gen-1, Gen-2, the large object heap, etc. Another great feature of the profiler is that if your application has more than 5% of cases where objects die right after making it to Gen-2 storage, a threshold alert is generated to warn you. Since you also have the option to view the most expensive methods, and since capturing the IntelliTrace data lets you drill in, you can narrow down to the line of code that is the root cause of the problem. So that is how crucial memory management is – and Visual Studio Ultimate 2010 makes it easy to identify and reproduce such problems with the best-of-breed tools in the product.

    Caching

    One of the main ways to improve performance is caching, which basically means you tell the web server that instead of going to the database for each request, it should keep the data on the web server and serve it from there when the user asks for it. BUT that can have consequences!
    Let’s look at some code – trust me, caching code is not very intuitive. I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine: the first time, I get the data from the database; the second time, the data is served from the cache – a significant performance improvement – EXCEPT when two users try to do the same operation and run into each other. But it is easy to handle this by adding a lock, as in the sketch below. As long as a user comes in and finds the cache empty, the user takes the lock and starts to build the cache; no more concurrency issues. But let’s say you are processing 10 requests per second: by the time I have locked the operation to get the results from the database, 9 other users have come in and found the cache key null, so after I have come out and populated the cache they will still go to the database to get the results again. The application will still be faster, because the next set of 10 users, and so on, will get their data from the cache. BUT if we add another null check after taking the lock – before the actual call to the database – then the 9 users who follow me will not make the extra trip to the database at all, and that really increases performance. But didn’t I say that the code won’t be very intuitive? Maybe you should leave a comment – you don’t want another developer to come along and wonder why you are checking the cache key for null twice!

    The downside of caching is that you are storing the data outside of the database, and the cached data can be wrong, because updates applied to the database put the data cached at the web server out of sync. So, how do you invalidate the cache? Well, if you only had one way of updating the data – say, a single entry point for data updates – you could write some logic to set the cache object to null every time new data is entered. But this approach stops working as soon as you have several ways of feeding data into the system, or your system is scaled out across a farm of web servers. The perfect solution to this is micro-caching, which means you cache the query results for a set time duration and invalidate the cache after that duration. The advantage is that every time the user queries for that data within the time span for which you have cached the results, no calls are made to the database and the data is served right from the server, which makes the response immensely quick. Figuring out the appropriate time span for which you micro-cache the query results really depends on the application. Let’s say your website gets 10 requests per second: if you retain the cached results for even 1 minute, you will see immense performance gains – you would cut roughly 90% of the search hits to the database. Ever wondered why, when you go to e-bookers.com or xpedia.com or yatra.com to book a flight and you click on the book button because the fare seems too exciting, you get an error message telling you that the fare is not valid any more? Yes, exactly – that is a cache failure! These travel sites and price-comparison engines are not going to hit the database every time you hit the compare button; instead the results are served from the cache, because the query results are micro-cached. It’s a perfect trade-off: by micro-caching the results the site gains huge performance benefits, but every once in a while it annoys a customer because the fare has expired.
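
    A minimal sketch of the double-checked caching pattern described above (the class, method and key names here are illustrative, not the original post's code):

        using System;
        using System.Collections;
        using System.Web;
        using System.Web.Caching;

        public class SearchService
        {
            private static readonly object CacheLock = new object();

            public ArrayList GetSearchResults(string query)
            {
                string cacheKey = "search:" + query;
                var results = (ArrayList)HttpRuntime.Cache[cacheKey];
                if (results == null)    // first check: lock-free on the common cache-hit path
                {
                    lock (CacheLock)
                    {
                        // Second check: threads that queued up on the lock while another
                        // thread populated the cache skip the database trip entirely.
                        results = (ArrayList)HttpRuntime.Cache[cacheKey];
                        if (results == null)
                        {
                            results = QueryDatabase(query);
                            // Micro-caching: the absolute expiry invalidates the entry after
                            // a short window, keeping cached results roughly in sync with
                            // the database without explicit invalidation logic.
                            HttpRuntime.Cache.Insert(cacheKey, results, null,
                                DateTime.UtcNow.AddMinutes(1), Cache.NoSlidingExpiration);
                        }
                    }
                }
                return results;
            }

            private ArrayList QueryDatabase(string query)
            {
                // Hypothetical data-access call standing in for the real search query.
                return new ArrayList { query };
            }
        }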
    But the trade-off works in favour of these sites: they are still able to process 30+ page requests per second and so cater for the site traffic, at the cost of maybe losing one customer every once in a while to a competitor who is using a similar caching technique – and what are the odds that the user will not come back to their site sooner or later?

    Resources

    Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug web performance tests. Some community test extensions and plug-ins available on CodePlex might also be of interest to you.

    The Road Ahead

    Thank you for taking the time out to read this blog post; you may also want to read Part I and Part II if you haven't so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions/feedback/suggestions, etc. – please leave a comment. Next up, 'Load Testing in the Cloud': I'll be working on exploring the possibilities of running the test controller/agents in the cloud. See you on the other side! Thank you!


  • VB.NET DownloadDataAsync:

    - by Brett
    Hi everybody, I am having the worst trouble getting around a bug, and am hoping that I can get some advice on this site. In short, I am trying to make an asynchronous web service call from my VB.NET application, but my "client_DownloadDataCompleted" callback is NEVER called when the download is complete. Here is my complete code:

        Public Sub BeginAsyncDownload(ByVal Url As String)
            Dim waiter As System.Threading.AutoResetEvent = New System.Threading.AutoResetEvent(False)
            Dim client As WebClient = New WebClient()
            'client_DownloadDataCompleted method gets called when the download completes.
            AddHandler client.DownloadDataCompleted, AddressOf client_DownloadDataCompleted
            Dim uri As Uri = New Uri(Url)
            Downloading = True 'Class variable defined elsewhere
            client.DownloadDataAsync(uri, waiter)
        End Sub

        Private Sub client_DownloadDataCompleted(ByVal sender As Object, ByVal e As AsyncCompletedEventArgs)
            MessageBox.Show("Download Completed")
            Downloading = False
            Debug.Print("Downloaded")
        End Sub

    Again, the client_DownloadDataCompleted method is never called. I have also tried using the signature:

        Private Sub client_DownloadDataCompleted(ByVal sender As Object, ByVal e As DownloadDataCompletedEventArgs)

    with no luck. What I really need is for that "Downloading" variable to be switched off when the download is complete. Thanks in advance! Brett


  • WF4 workflow versioning using WorkflowServiceHost

    - by rwwilden
    Related to this question. I understand how to implement versioning of workflows using WorkflowApplication: if you keep the original XAML definition for older versions of your workflow around, you can load them using the right WorkflowApplication constructor. But how do you ensure that WorkflowServiceHost uses the correct workflow definition when you want to host your workflows in IIS? There is a WorkflowServiceHost constructor that you can use to load a workflow definition, but when you are hosting inside IIS through a XAMLX file, you do not call WorkflowServiceHost yourself – this is handled somehow by IIS. So how do I ensure that the correct workflow definition is loaded for the right version of my workflow?
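
    One direction worth exploring – a sketch of an idea, not a verified recipe; the version-selection logic here is entirely hypothetical: IIS activation goes through a WorkflowServiceHostFactory, and plugging in your own factory lets you control which definition gets hosted.

        using System;
        using System.ServiceModel.Activities;
        using System.ServiceModel.Activities.Activation;
        using System.Web.Hosting;
        using System.Xaml;

        public class VersionedWorkflowServiceHostFactory : WorkflowServiceHostFactory
        {
            protected override WorkflowServiceHost CreateWorkflowServiceHost(
                WorkflowService service, Uri[] baseAddresses)
            {
                // Hypothetical: load the XAML definition for the version this
                // endpoint should serve instead of the one IIS resolved.
                string path = HostingEnvironment.MapPath("~/App_Data/MyWorkflow.v1.xamlx");
                var versioned = (WorkflowService)XamlServices.Load(path);
                return base.CreateWorkflowServiceHost(versioned, baseAddresses);
            }
        }

    A custom factory can be wired up through the serviceActivations section of web.config (which accepts a factory attribute); whether this fully solves per-instance versioning for long-running workflows is something you would need to verify.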


  • AnkhSVN client side pre-commit hook

    - by santa
    Basically I want to do the same thing as the fella over there. It seems that everybody was thinking about server-side hooks (with all their evil potential); I want a client-side script to be run before commit, so that astyle can format the code the way my boss likes to see it. Since my IDE (VS2010 Pro) automatically detects when a file has changed on disk and offers to reload it, there is no real evil in all that. Is there any (clean) way to accomplish this with AnkhSVN? Maybe there's also a way to extend Visual Studio to call my pre-commit script...


  • get contact info from android contact picker

    - by ng93
    Hi, I'm trying to call the contact picker, get the person's name, phone and e-mail into strings, and send them to another activity using an intent. So far this works:

        Intent intent = new Intent(Intent.ACTION_PICK, ContactsContract.Contacts.CONTENT_URI);
        startActivityForResult(intent, 1);

        @Override
        public void onActivityResult(int reqCode, int resultCode, Intent data) {
            super.onActivityResult(reqCode, resultCode, data);
            if (resultCode == Activity.RESULT_OK) {
                Uri contactData = data.getData();
                Cursor c = managedQuery(contactData, null, null, null, null);
                if (c.moveToFirst()) {
                    String name = c.getString(c.getColumnIndexOrThrow(ContactsContract.Contacts.DISPLAY_NAME));
                    Intent intent = new Intent(CurrentActivity.this, NewActivity.class);
                    intent.putExtra("name", name);
                    startActivityForResult(intent, 0);
                }
            }
        }

    But if I add in:

        String number = c.getString(c.getColumnIndexOrThrow(ContactsContract.CommonDataKinds.Phone.NUMBER));

    it force closes. Maybe there's another way to get their number? Thanks for the help, ng93


  • XSS attack to bypass htmlspecialchars() function in value attribute

    - by Setzer
    Let's say we have this form, and the part where a user could inject malicious code is below:

        ... <input type=text name=username value=<?php echo htmlspecialchars($_POST['username']); ?> ...

    We can't simply put in a tag or a javascript:alert(); call, because the value will be interpreted as a string, and htmlspecialchars filters out the <, >, ' and " characters, so we can't close off the value with quotation marks. We can use String.fromCharCode(.....) to get around the quotes, but I am still unable to get a simple alert box to pop up. Any ideas?


  • Load ActiveX DLL in Internet Explorer with elevated privileges

    - by adum
    I have an ActiveX control that I'm loading with JavaScript in Internet Explorer. It needs to run as medium integrity under UAC in Vista and Win7. This is written in C/C++, compiled in Visual Studio. One way to elevate privileges is to create a broker process that can request a medium integrity level. However, for this project, this is not a practical solution. I really need the ActiveX control itself to run elevated. My question is: what's the easiest way to do this? Can I change the build options on the project to be an exe, and use the COM interprocess connect system to automatically handle the communications, or do I need to be more sophisticated? Do I need to do anything complicated like manually call CreateProcess and make some kind of broker, or can it just work as an ActiveX exe that elevates itself?


  • Textbox OnTextChanged and Button Click Event Not Firing

    - by goodwince
    I have textboxes being generated by a repeater that use OnTextChanged with AutoPostBack enabled so that I can know when the values change. This all works perfectly. The problem starts when I try to click on any buttons on the page: the loss of focus triggers the OnTextChanged event; however, the event for my buttons never gets fired. I checked this in the debugger: while debugging, if I put a break-point in the Page_Load it will call both; however, without the break-point it still only calls the OnTextChanged event. I found this post on JavaScript. If my problem is also JavaScript related, why does the clicking of the button fire in debug mode? Thanks.


  • Getting Permissions error while using stream.publish method of facebook API

    - by tek3
    Hi, I want to publish a post on a user's wall but am getting a "permission error" with error code 200 when trying to use the stream.publish method of the Facebook API. I have requested the extended permissions as:

        http://m.facebook.com/login.php?api%5Fkey="+API_KEY&....&req_perms=read_stream,publish_stream,offline_access

    but when I make the call to stream.publish I get this permission error; it seems that req_perms in the above URL is simply being ignored. I am passing "method(stream.publish)", "api_key", "message", "session_key", "v" and "sig" as parameters to the URL http://api.facebook.com/restserver.php. I would be grateful if anyone could help me out with this problem or provide me with the proper steps for publishing a post on a user's wall. The application is being developed on the BlackBerry platform.


  • IoC with AutoMapper Profile using Autofac

    - by DownChapel
    I have been using AutoMapper for some time now. I have a profile set up like so:

        public class ViewModelAutoMapperConfiguration : Profile
        {
            protected override string ProfileName
            {
                get { return "ViewModel"; }
            }

            protected override void Configure()
            {
                AddFormatter<HtmlEncoderFormatter>();
                CreateMap<IUser, UserViewModel>();
            }
        }

    I add this to the mapper using the following call:

        Mapper.Initialize(x => x.AddProfile<ViewModelAutoMapperConfiguration>());

    However, I now want to pass a dependency into the ViewModelAutoMapperConfiguration constructor using IoC. I am using Autofac. I have been reading through the article here: http://www.lostechies.com/blogs/jimmy_bogard/archive/2009/05/11/automapper-and-ioc.aspx but I can't see how this would work with Profiles. Any ideas? Thanks
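
    One possible direction – a sketch, assuming the static Mapper API from that era of AutoMapper: register the profiles with Autofac so their constructor dependencies are injected, then resolve them and hand each instance to AddProfile, which accepts a Profile instance as well as a type parameter.

        var builder = new ContainerBuilder();
        builder.RegisterType<HtmlEncoder>();   // hypothetical dependency of the profile
        builder.RegisterType<ViewModelAutoMapperConfiguration>().As<Profile>();
        var container = builder.Build();

        // Autofac resolves IEnumerable<Profile> as "all registered profiles",
        // injecting each profile's constructor dependencies along the way.
        Mapper.Initialize(cfg =>
        {
            foreach (var profile in container.Resolve<IEnumerable<Profile>>())
            {
                cfg.AddProfile(profile);
            }
        });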


  • jquery ajax method always returning an error?

    - by General_9
    I have the following Ajax call, and it hits the error callback function every time it is called. The code in the handler still runs after the error, but the success callback is never executed. What have I got wrong?

        $.ajax({
            type: "POST",
            url: "Handlers/TheHandler.ashx",
            data: {
                control1: $('[id*=control1]').val(),
                control2: $('[id*=control2]').val(),
                control3: $('[id*=control3]').val(),
                control4: $('#control4').val(),
                control5: $('[id*=control5]').val(),
                control6: $('[id*=control6]').val()
            },
            error: function (jqXHR, textStatus, errorThrown) {
                alert(jqXHR.readyState);
                alert(textStatus);
                alert(errorThrown);
            },
            success: function (returnedValue) {
                alert("Got Here");
                alert(returnedValue);
            }
        });


  • ASP.NET and HTML5 Local Storage

    - by Stephen Walther
    My favorite feature of HTML5, hands-down, is HTML5 local storage (aka DOM storage). By taking advantage of HTML5 local storage, you can dramatically improve the performance of your data-driven ASP.NET applications by caching data in the browser persistently. Think of HTML5 local storage like browser cookies, but much better. Like cookies, local storage is persistent: when you add something to browser local storage, it remains there when the user returns to the website (possibly days or months later). Importantly, unlike the 4 KB cookie storage limitation, you can store up to 10 megabytes in HTML5 local storage. Because HTML5 local storage works with the latest versions of all modern browsers (IE, Firefox, Chrome, Safari), you can start taking advantage of this HTML5 feature in your applications right now.

    Why use HTML5 Local Storage?

    I use HTML5 local storage in the JavaScript Reference application: http://Superexpert.com/JavaScriptReference. The JavaScript Reference application is an HTML5 app that provides an interactive reference for all of the syntax elements of JavaScript (you can read more about the application and download the source code for the application here). When you open the application for the first time, all of the entries are transferred from the server to the browser (all 300+ entries) and stored in local storage. When you open the application in the future, only changes are transferred from the server to the browser. The benefit of this approach is that the application performs extremely fast. When you click the details link to view details on a particular entry, the entry details appear instantly, because all of the entries are stored on the client machine. When you perform key-up searches, by typing in the filter textbox, matching entries are displayed very quickly, because the entries are being filtered on the local machine. This approach can have a dramatic effect on the performance of any interactive data-driven web application: interacting with data on the client is almost always faster than interacting with the same data on the server.

    Retrieving Data from the Server

    In the JavaScript Reference application, I use Microsoft WCF Data Services to expose data to the browser. WCF Data Services generates a REST interface for your data automatically. Here are the steps:

    1. Create your database tables in Microsoft SQL Server. For example, I created a database named ReferenceDB and a database table named Entities.

    2. Use the Entity Framework to generate your data model. For example, I used the Entity Framework to generate a class named ReferenceDBEntities and a class named Entities.

    3. Expose your data through WCF Data Services. I added a WCF Data Service to my project and modified the data service class to look like this:

        using System.Data.Services;
        using System.Data.Services.Common;
        using System.Web;
        using JavaScriptReference.Models;

        namespace JavaScriptReference.Services
        {
            [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)]
            public class EntryService : DataService<ReferenceDBEntities>
            {
                // This method is called only once to initialize service-wide policies.
                public static void InitializeService(DataServiceConfiguration config)
                {
                    config.UseVerboseErrors = true;
                    config.SetEntitySetAccessRule("*", EntitySetRights.All);
                    config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
                }

                // Define a change interceptor for the Entries entity set.
                [ChangeInterceptor("Entries")]
                public void OnChangeEntries(Entry entry, UpdateOperations operations)
                {
                    if (!HttpContext.Current.Request.IsAuthenticated)
                    {
                        throw new DataServiceException("Cannot update reference unless authenticated.");
                    }
                }
            }
        }

    The WCF data service is named EntryService. Notice that it derives from DataService<ReferenceDBEntities>; because of this, the data service exposes the contents of the ReferenceDB database. In the code above, I defined a ChangeInterceptor to prevent unauthenticated users from making changes to the database: anyone can retrieve data through the service, but only authenticated users are allowed to make changes. After you expose data through a WCF Data Service, you can use jQuery to retrieve the data by performing an Ajax call. For example, I am using an Ajax call that looks something like this to retrieve the JavaScript entries from the EntryService.svc data service:

        $.ajax({
            dataType: "json",
            url: "/Services/EntryService.svc/Entries",
            success: function (result) {
                var data = callback(result["d"]);
            }
        });

    Notice that you must unwrap the data using result["d"]. After you unwrap the data, you have a JavaScript array of the entries. I'm transferring all 300+ entries from the server to the client when the application is opened for the first time. In other words, I transfer the entire database from the server to the client, once and only once, when the application is opened for the first time. The data is transferred using JSON. Here is a fragment:

        { "d" : [
          { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(1)", "type": "ReferenceDBModel.Entry" },
            "Id": 1, "Name": "Global", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "object",
            "ShortDescription": "Contains global variables and functions",
            "FullDescription": "<p>\nThe Global object is determined by the host environment. In web browsers, the Global object is the same as the windows object.\n</p>\n<p>\nYou can use the keyword <code>this</code> to refer to the Global object when in the global context (outside of any function).\n</p>\n<p>\nThe Global object holds all global variables and functions. For example, the following code demonstrates that the global <code>movieTitle</code> variable refers to the same thing as <code>window.movieTitle</code> and <code>this.movieTitle</code>.\n</p>\n<pre>\nvar movieTitle = \"Star Wars\";\nconsole.log(movieTitle === this.movieTitle); // true\nconsole.log(movieTitle === window.movieTitle); // true\n</pre>\n",
            "LastUpdated": "634298578273756641", "IsDeleted": false, "OwnerId": null },
          { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(2)", "type": "ReferenceDBModel.Entry" },
            "Id": 2, "Name": "eval(string)", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "function",
            "ShortDescription": "Evaluates and executes JavaScript code dynamically",
            "FullDescription": "<p>\nThe following code evaluates and executes the string \"3+5\" at runtime.\n</p>\n<pre>\nvar result = eval(\"3+5\");\nconsole.log(result); // returns 8\n</pre>\n<p>\nYou can rewrite the code above like this:\n</p>\n<pre>\nvar result;\neval(\"result = 3+5\");\nconsole.log(result);\n</pre>",
            "LastUpdated": "634298580913817644", "IsDeleted": false, "OwnerId": 1 }
          … ]}

    I worried about the amount of time that it would take to transfer the records.
    According to Google Chrome, it takes about 5 seconds to retrieve all 300+ records on a broadband connection over the Internet. 5 seconds is a small price to pay to avoid performing any server fetches of the data in the future. Fiddler's estimated times for different connection types tell a different story, though: over a modem, it takes 33 seconds to download the database. 33 seconds is a significant chunk of time, so I would not use the approach of transferring the entire database up front if you expect a significant portion of your website audience to connect to your website with a modem.

    Adding Data to HTML5 Local Storage

    After the JavaScript entries are retrieved from the server, the entries are stored in HTML5 local storage. Here's the reference documentation for HTML5 storage for Internet Explorer: http://msdn.microsoft.com/en-us/library/cc197062(VS.85).aspx. You access local storage through the window.localStorage object in JavaScript. This object contains key/value pairs. For example, you can use the following JavaScript code to add a new item to local storage:

        <script type="text/javascript">
            window.localStorage.setItem("message", "Hello World!");
        </script>

    You can use the Google Chrome Storage tab in the Developer Tools (hit CTRL-SHIFT-I in Chrome) to view items added to local storage. After you add an item to local storage, you can read it at any time in the future by using the window.localStorage.getItem() method:

        <script type="text/javascript">
            var message = window.localStorage.getItem("message");
        </script>

    You can only add strings to local storage, not JavaScript objects such as arrays. Therefore, before adding a JavaScript object to local storage, you need to convert it into a JSON string. In the JavaScript Reference application, I use a wrapper around local storage that looks something like this:

        function Storage() {
            this.get = function (name) {
                return JSON.parse(window.localStorage.getItem(name));
            };
            this.set = function (name, value) {
                window.localStorage.setItem(name, JSON.stringify(value));
            };
            this.clear = function () {
                window.localStorage.clear();
            };
        }

    If you use the wrapper above, then you can add arbitrary JavaScript objects to local storage like this:

        var store = new Storage();

        // Add array to storage
        var products = [
            { name: "Fish", price: 2.33 },
            { name: "Bacon", price: 1.33 }
        ];
        store.set("products", products);

        // Retrieve items from storage
        var products = store.get("products");

    Modern browsers support the JSON object natively. If you need the script above to work with older browsers, then you should download the JSON2.js library from https://github.com/douglascrockford/JSON-js. The JSON2 library will use the native JSON object if a browser already supports JSON.

    Merging Server Changes with Browser Local Storage

    When you first open the JavaScript Reference application, the entire database of JavaScript entries is transferred from the server to the browser. Two items are added to local storage: entries and entriesLastUpdated. The first item contains the entire entries database (a big JSON string of entries). The second item, a timestamp, represents the version of the entries. Whenever you open the JavaScript Reference in the future, the entriesLastUpdated timestamp is passed to the server, and only records that have been deleted, updated, or added since entriesLastUpdated are transferred to the browser.
    The OData query to get the latest updates looks like this:

        http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated%20gt%20634301199890494792L)

    If you remove the URL encoding, the query looks like this:

        http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated gt 634301199890494792L)

    This query returns only those entries where the value of LastUpdated > 634301199890494792 (the version timestamp). The changes – new JavaScript entries, deleted entries, and updated entries – are merged with the existing entries in local storage. The JavaScript code for performing the merge is contained in the EntriesHelper.js file. The merge() method looks like this:

        merge: function (oldEntries, newEntries) {
            // concat (this performs the add)
            oldEntries = oldEntries || [];
            var mergedEntries = oldEntries.concat(newEntries);

            // sort
            this.sortByIdThenLastUpdated(mergedEntries);

            // prune duplicates (this performs the update)
            mergedEntries = this.pruneDuplicates(mergedEntries);

            // delete
            mergedEntries = this.removeIsDeleted(mergedEntries);

            // sort
            this.sortByName(mergedEntries);

            return mergedEntries;
        },

    The contents of local storage are then updated with the merged entries. I spent several hours writing the merge() method (much longer than I expected). I found two resources to be extremely useful. First, I wrote extensive unit tests for the merge() method, using server-side JavaScript; I describe this approach to writing unit tests in this blog entry, and the unit tests are included in the JavaScript Reference source code. Second, I found the following blog entry to be super useful (thanks Nick!): http://nicksnettravels.builttoroam.com/post/2010/08/03/OData-Synchronization-with-WCF-Data-Services.aspx

    One big challenge that I encountered involved timestamps. I originally tried to store an actual UTC time as the value of the entriesLastUpdated item, but I quickly discovered that trying to work with dates in JSON turned out to be a big can of worms that I did not want to open. Next, I tried to use a SQL timestamp column. However, I learned that OData cannot handle the timestamp data type when doing a filter query. Therefore, I ended up using a bigint column in SQL and manually creating the value when a record is updated. I overrode the SaveChanges() method to look something like this:

        public override int SaveChanges(SaveOptions options)
        {
            var changes = this.ObjectStateManager.GetObjectStateEntries(
                EntityState.Modified | EntityState.Added | EntityState.Deleted);
            foreach (var change in changes)
            {
                var entity = change.Entity as IEntityTracking;
                if (entity != null)
                {
                    entity.LastUpdated = DateTime.Now.Ticks;
                }
            }
            return base.SaveChanges(options);
        }

    Notice that I assign DateTime.Now.Ticks to the entity.LastUpdated property whenever an entry is modified, added, or deleted.

    Summary

    After building the JavaScript Reference application, I am convinced that HTML5 local storage can have a dramatic impact on the performance of any data-driven web application. If you are building a web application that involves extensive interaction with data, then I recommend that you take advantage of this new feature included in the HTML5 standard.

