Search Results

Search found 47383 results on 1896 pages for 'version control migration'.


  • Is the single <form runat="server">-element requirement really necessary for ASP.NET WebForms?

    - by michielvoo
    Looking at some of the changes coming to WebForms in ASP.NET 4.0, I can see many improvements that give developers even more control over the output. Some of these improvements have been a long time coming, and for some time it seemed they weren't even possible. It made me wonder whether the current model, with the single form element that runs on the server, is really the only possible way. Why couldn't the ASP.NET WebForms architecture work with multiple forms that all run on the server? Imagine you could architect this change. How would it impact the way we write code-behind today? Would it introduce extra complexity? Would it change the way event handlers work, or validation, or ASP.NET Ajax with the ScriptManager and UpdatePanel controls?
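
    For context, a hedged illustration (not from the original post): in the current model every server control must sit inside the page's one server-side form, and the runtime enforces this. Rendering a page containing a second <form runat="server"> throws an HttpException ("A page can have only one server-side Form tag").

        <%-- Minimal WebForms page: all server controls live inside the single server form. --%>
        <form id="form1" runat="server">
            <asp:TextBox ID="NameBox" runat="server" />
            <asp:Button ID="Submit" runat="server" Text="Go" OnClick="Submit_Click" />
        </form>
        <%-- Rendering a second <form runat="server"> here would throw at runtime. --%>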

    Read the article

  • ASP.NET - Web Application, UserControls and NullReferenceExceptions

    - by Echilon
    I have a web application which works fine if I include my user controls with <%@ Register TagPrefix="mine" TagName="MyUC1" Src="~/UserControls/MyUc1.ascx" %> <%@ Register TagPrefix="mine" TagName="MyUC2" Src="~/UserControls/MyUc2.ascx" %> But I need to use the namespace form because I have to integrate with Umbraco. When I replace the Register declaration with: <%@ Register TagPrefix="mine" Namespace="MyAssembly.UserControls" Assembly="MyAssembly"%> I get a NullReferenceException in the UserControl's Page_Load event (which references an ASP.NET control used by the UserControl itself). I find this pretty bizarre, but I've found very little information on how to fix it.

    Read the article

  • How to check if datetime is older than 20 seconds.

    - by Jelle
    Hello! This is my first time here, so I hope I'm posting this question in the right place. :) I need to build flood control for my script, but I'm not good at all these datetime conversions with UTC and such. I hope you can help me out. I'm using Google App Engine with Python. I've got a DateTimeProperty in the Datastore which should be checked: if it's older than 20 seconds, then proceed. Could anybody help me out? In semi-pseudocode: q = db.GqlQuery("SELECT * FROM Kudo WHERE fromuser = :1", user) lastplus = q.get() if lastplus.date is older than 20 seconds: print "Go!"
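
    A minimal sketch of the comparison, assuming the question's query and that Kudo.date is a db.DateTimeProperty (Datastore datetimes are naive UTC, so utcnow() is the matching clock):

        import datetime
        from google.appengine.ext import db

        # `user` comes from the question's surrounding context.
        q = db.GqlQuery("SELECT * FROM Kudo WHERE fromuser = :1", user)
        lastplus = q.get()

        # "Older than 20 seconds" means (now - stored time) exceeds a 20 s timedelta.
        if lastplus and datetime.datetime.utcnow() - lastplus.date > datetime.timedelta(seconds=20):
            print "Go!"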

    Read the article

  • Create and Backup Multiple Profiles in Google Chrome

    - by Asian Angel
    Other browsers such as Firefox and SeaMonkey allow you to have multiple profiles but not Chrome…at least not until now. If you want to use multiple profiles and create backups for them then join us as we look at Google Chrome Backup. Note: There is a paid version of this program available but we used the free version for our article. Google Chrome Backup in Action During the installation process you will run across this particular window. It will have a default user name filled in as shown here…you will not need to do anything except click on Next to continue installing the program. When you start the program for the first time this is what you will see. Your default Chrome Profile will already be visible in the window. A quick look at the Profile Menu… In the Tools Menu you can go ahead and disable the Start program at Windows Startup setting…the only time that you will need the program running is if you are creating or restoring a profile. When you create a new profile the process will start with this window. You can access an Advanced Options mode if desired but most likely you will not need it. Here is a look at the Advanced Options mode. It is mainly focused on adding Switches to the new Chrome Shortcut. The drop-down menu for the Switches available… To create your new profile you will need to choose: A profile location A profile name (as you type/create the profile name it will automatically be added to the Profile Path) Make certain that the Create a new shortcut to access new profile option is checked For our example we decided to try out the Disable plugins switch option… Click OK to create the new profile. Once you have created your new profile, you will find a new shortcut on the Desktop. Notice that the shortcut’s name will be Google Chrome + profile name that you chose. Note: On our system we were able to move the new shortcut to the “Start Menu” without problems. Clicking on our new profile’s shortcut opened up a fresh and clean looking instance of Chrome. Just out of curiosity we did decide to check the shortcut to see if the Switch set up correctly. Unfortunately it did not in this instance…so your mileage with the Switches may vary. This was just a minor quirk and nothing to get excited or upset over…especially considering that you can create multiple profiles so easily. After opening up our default profile of Chrome you can see the individual profile icons (New & Default in order) sitting in the Taskbar side-by-side. And our two profiles open at the same time on our Desktop… Backing Profiles Up For the next part of our tests we decided to create a backup for each of our profiles. Starting the wizard will allow you to choose between creating or restoring a profile. Note: To create or restore a backup click on Run Wizard. When you reach the second part of the process you can go with the Backup default profile option or choose a particular one from a drop-down list using the Select a profile to backup option. We chose to backup the Default Profile first… In the third part of the process you will need to select a location to save the profile to. Once you have selected the location you will see the Target Path as shown here. You can choose your own name for the backup file…we decided to go with the default name instead since it contained the backup’s calendar date. A very nice feature is the ability to have the cache cleared before creating the backup. We clicked on Yes…choose the option that best suits your needs. 
Once you have chosen either Yes or No, the backup will be created. Click Finish to complete the process. The backup file for our Default Profile came in at 14.0 MB in size. And the backup file for our Chrome Fresh Profile…2.81 MB. Restoring Profiles For the final part of our tests we decided to do a Restore. Select Restore and click Next to get the process started. In the second step you will need to browse for the Profile Backup File (and select the desired profile if you have created multiples). For our example we decided to overwrite the original Default Profile with the Chrome Fresh Profile. The third step lets you choose where to restore the chosen profile to…you can go with the Default Profile or choose one from the drop-down list using the Restore to a selected profile option. The final step will get you on your way to restoring the chosen profile. The program will conduct a check regarding the previous/old profile and ask if you would like to proceed with overwriting it. Definitely nice in case you change your mind at the last moment. Clicking Yes will finish the restoration. The only other odd quirk that we noticed while using the program was that the Next Button did not function after restoring the profile. You can easily get around the problem by clicking to close the window. Which one is which? After the restore process we had identical twins. Conclusion If you have been looking for a way to create multiple profiles in Google Chrome, then you might want to add this program to your system. Links Download Google Chrome Backup

    Read the article

  • C# UserControl factory

    - by user1112111
    Let's say you have two classes that extend UserControl. Each of the controls provides a custom event (this could be done using an interface). You want to display one of the controls on odd days and the other on even days. You also want to be able to drag & drop (in Visual Studio) the UserControl onto your form without knowing what the control type will finally be. How do you do that? Is the factory pattern useful here?
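
    One hedged sketch of such a factory (OddDayControl, EvenDayControl, and the event name are hypothetical): keep a designer-friendly host control on the form, and let a small factory pick the real control at runtime based on the day.

        using System;
        using System.Windows.Forms;

        public interface IDayControl
        {
            event EventHandler SomethingHappened;   // the shared custom event
        }

        public static class DayControlFactory
        {
            // Odd days get one control type, even days the other.
            public static UserControl Create()
            {
                return (DateTime.Today.Day % 2 == 1)
                    ? (UserControl)new OddDayControl()
                    : new EvenDayControl();
            }
        }

        // Drag & drop this placeholder onto the form in the designer;
        // it swaps in the real control at runtime.
        public class DayControlHost : UserControl
        {
            protected override void OnLoad(EventArgs e)
            {
                base.OnLoad(e);
                if (DesignMode) return;   // show nothing special at design time
                UserControl inner = DayControlFactory.Create();
                inner.Dock = DockStyle.Fill;
                Controls.Add(inner);
                ((IDayControl)inner).SomethingHappened += (s, a) => { /* react */ };
            }
        }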

    Read the article

  • Why does an ASP.NET MVC form submit itself on button clicks when a JavaScript function errors?

    - by melaos
    Hi guys, I'm new to ASP.NET MVC, and while working on this I used very basic ASP.NET MVC features like BeginForm, etc. I used a lot of jQuery code this time round for client-side validation, Ajax data retrieval, and other GUI work, and I used a mix of plain HTML inputs, buttons, etc. and the ASP.NET MVC type of controls. What I've noticed is that whenever I click on a button control, which is sometimes tied to a jQuery click event, and there's a JavaScript error, the page just goes ahead and submits. Why is this happening, and what am I missing here? My bad for the dumb question… thanks
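
    A hedged illustration of the likely mechanics (not from the original thread, and the button id is invented): a submit button's default action is only suppressed if the handler runs to the point of cancelling it, so an exception thrown earlier in the handler lets the submit proceed.

        // jQuery sketch: if validate() throws (e.g. a ReferenceError), the
        // lines below it never run, the event is never cancelled, and the
        // browser performs the default action: submitting the form.
        $('#saveButton').click(function (e) {
            var ok = validate();
            if (!ok) {
                e.preventDefault();   // never reached when validate() throws
            }
        });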

    Read the article

  • Binding list of objects to WPF ListView

    - by Dave Colwell
    Hi all, I have a list of objects which I want to bind to a ListView control in my WPF application. The objects have a DataTemplate already, so no need to define that. The list of objects is a property in the code-behind file of type List<object>. When I add one object programmatically, it appears fine. But when I try to bind the ItemsSource of the ListView to the list of objects, nothing shows up. I am using the following binding: ItemsSource="{Binding RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type Window}}, Path=Portfolios}" where the name of the property I am trying to bind to is Portfolios and it exists on the parent window.
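
    Two usual suspects, offered as a hedged sketch rather than a confirmed diagnosis: a WPF binding path resolves public properties (not fields), and a plain List<T> raises no change notifications, so items added after the initial bind never appear. Exposing the collection as a public ObservableCollection<T> property addresses both (the Portfolio class and window name are placeholders):

        using System.Collections.ObjectModel;
        using System.Windows;

        public partial class MainWindow : Window
        {
            // Binding paths resolve public properties, not fields, and
            // ObservableCollection notifies the ListView when items come and go.
            public ObservableCollection<Portfolio> Portfolios { get; private set; }

            public MainWindow()
            {
                Portfolios = new ObservableCollection<Portfolio>();
                InitializeComponent();
                Portfolios.Add(new Portfolio());   // shows up without rebinding
            }
        }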

    Read the article

  • Per-pixel displacement mapping GLSL

    - by Chris
    I'm trying to implement a per-pixel displacement shader in GLSL. I read through several papers and "tutorials" I found, and ended up trying to implement the approach NVIDIA used in their Cascades demo (http://www.slideshare.net/icastano/cascades-demo-secrets) starting at slide 82. At the moment I am completely stuck on the following problem: when I am far away, the displacement seems to work, but the closer I move to the surface, the more the texture gets bent along the x-axis, and in general there seems to be a slight bend in one direction. EDIT: I added a video: click I added some screenshots to illustrate the problem: Well, I have tried lots of things already and I am starting to get a bit frustrated as my ideas run out. Here is my full VS and FS code: VS: #version 400 layout(location = 0) in vec3 IN_VS_Position; layout(location = 1) in vec3 IN_VS_Normal; layout(location = 2) in vec2 IN_VS_Texcoord; layout(location = 3) in vec3 IN_VS_Tangent; layout(location = 4) in vec3 IN_VS_BiTangent; uniform vec3 uLightPos; uniform vec3 uCameraDirection; uniform mat4 uViewProjection; uniform mat4 uModel; uniform mat4 uView; uniform mat3 uNormalMatrix; out vec2 IN_FS_Texcoord; out vec3 IN_FS_CameraDir_Tangent; out vec3 IN_FS_LightDir_Tangent; void main( void ) { IN_FS_Texcoord = IN_VS_Texcoord; vec4 posObject = uModel * vec4(IN_VS_Position, 1.0); vec3 normalObject = (uModel * vec4(IN_VS_Normal, 0.0)).xyz; vec3 tangentObject = (uModel * vec4(IN_VS_Tangent, 0.0)).xyz; //vec3 binormalObject = (uModel * vec4(IN_VS_BiTangent, 0.0)).xyz; vec3 binormalObject = normalize(cross(tangentObject, normalObject)); // uCameraDirection is the camera position, just badly named vec3 fvViewDirection = normalize( uCameraDirection - posObject.xyz); vec3 fvLightDirection = normalize( uLightPos.xyz - posObject.xyz ); IN_FS_CameraDir_Tangent.x = dot( tangentObject, fvViewDirection ); IN_FS_CameraDir_Tangent.y = dot( binormalObject, fvViewDirection ); IN_FS_CameraDir_Tangent.z = dot( normalObject, fvViewDirection ); IN_FS_LightDir_Tangent.x = dot( tangentObject, fvLightDirection ); IN_FS_LightDir_Tangent.y = dot( binormalObject, fvLightDirection ); IN_FS_LightDir_Tangent.z = dot( normalObject, fvLightDirection ); gl_Position = (uViewProjection*uModel) * vec4(IN_VS_Position, 1.0); } The VS just builds the TBN matrix from the incoming normal, tangent, and binormal in world space, calculates the light and eye directions in world space, and finally transforms the light and eye directions into tangent space.
    FS: #version 400 // uniforms uniform Light { vec4 fvDiffuse; vec4 fvAmbient; vec4 fvSpecular; }; uniform Material { vec4 diffuse; vec4 ambient; vec4 specular; vec4 emissive; float fSpecularPower; float shininessStrength; }; uniform sampler2D colorSampler; uniform sampler2D normalMapSampler; uniform sampler2D heightMapSampler; in vec2 IN_FS_Texcoord; in vec3 IN_FS_CameraDir_Tangent; in vec3 IN_FS_LightDir_Tangent; out vec4 color; vec2 TraceRay(in float height, in vec2 coords, in vec3 dir, in float mipmap){ vec2 NewCoords = coords; vec2 dUV = - dir.xy * height * 0.08; float SearchHeight = 1.0; float prev_hits = 0.0; float hit_h = 0.0; for(int i=0;i<10;i++){ SearchHeight -= 0.1; NewCoords += dUV; float CurrentHeight = textureLod(heightMapSampler,NewCoords.xy, mipmap).r; float first_hit = clamp((CurrentHeight - SearchHeight - prev_hits) * 499999.0,0.0,1.0); hit_h += first_hit * SearchHeight; prev_hits += first_hit; } NewCoords = coords + dUV * (1.0-hit_h) * 10.0f - dUV; vec2 Temp = NewCoords; SearchHeight = hit_h+0.1; float Start = SearchHeight; dUV *= 0.2; prev_hits = 0.0; hit_h = 0.0; for(int i=0;i<5;i++){ SearchHeight -= 0.02; NewCoords += dUV; float CurrentHeight = textureLod(heightMapSampler,NewCoords.xy, mipmap).r; float first_hit = clamp((CurrentHeight - SearchHeight - prev_hits) * 499999.0,0.0,1.0); hit_h += first_hit * SearchHeight; prev_hits += first_hit; } NewCoords = Temp + dUV * (Start - hit_h) * 50.0f; return NewCoords; } void main( void ) { vec3 fvLightDirection = normalize( IN_FS_LightDir_Tangent ); vec3 fvViewDirection = normalize( IN_FS_CameraDir_Tangent ); float mipmap = 0; vec2 NewCoord = TraceRay(0.1,IN_FS_Texcoord,fvViewDirection,mipmap); //vec2 ddx = dFdx(NewCoord); //vec2 ddy = dFdy(NewCoord); vec3 BumpMapNormal = textureLod(normalMapSampler, NewCoord.xy, mipmap).xyz; BumpMapNormal = normalize(2.0 * BumpMapNormal - vec3(1.0, 1.0, 1.0)); vec3 fvNormal = BumpMapNormal; float fNDotL = dot( fvNormal, fvLightDirection ); vec3 fvReflection = normalize( ( ( 2.0 * fvNormal ) * fNDotL ) - fvLightDirection ); float fRDotV = max( 0.0, dot( fvReflection, fvViewDirection ) ); vec4 fvBaseColor = textureLod( colorSampler, NewCoord.xy,mipmap); vec4 fvTotalAmbient = fvAmbient * fvBaseColor; vec4 fvTotalDiffuse = fvDiffuse * fNDotL * fvBaseColor; vec4 fvTotalSpecular = fvSpecular * ( pow( fRDotV, fSpecularPower ) ); color = ( fvTotalAmbient + (fvTotalDiffuse + fvTotalSpecular) ); } The FS implements the displacement technique in the TraceRay method, always using mipmap level 0. Most of the code is from the NVIDIA sample and another paper I found on the web, so I guess there can't be much wrong here. At the end it uses the modified UV coords to fetch the displaced normal from the normal map and the color from the color map. I'm looking forward to your ideas. Thanks in advance! Edit: Here is the code loading the heightmap: glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, mImageData); glGenerateMipmap(GL_TEXTURE_2D); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT); Maybe something is wrong in here?
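
    One hedged aside on the loading code, independent of the bending itself: GL_TEXTURE_MAG_FILTER only accepts GL_NEAREST or GL_LINEAR (magnification never samples mipmaps), so re-enabling the commented-out GL_LINEAR_MIPMAP_LINEAR magnification line as written would raise GL_INVALID_ENUM. A typical setup for a mipmapped height map looks like:

        /* Mipmapped minification, plain linear magnification, repeat wrapping. */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);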

    Read the article

  • Is WordPress a good platform for webapp development?

    - by darlinton
    I am planning a webapp to control bank payment orders. In a quick overview: the user goes online and creates a payment order. This order goes to other people, who pay it and register the payment in the system. The system keeps track of all the payments, keeping the account balance up to date. The system needs a login system, bank integration, and at some point support for thousands of clients. There are articles on the web about the benefits of using the WordPress platform to build webapps. However, I could not find any discussion with counterarguments to using WordPress. As the platform is the most important choice in a webapp project, I would like to know more about the pitfalls and harms of choosing WordPress. The question is: what are the benefits and harms of choosing WordPress as a development platform for a webapp that needs to integrate with other (backend) systems and handle thousands of users (does it scale up?)?

    Read the article

  • How to select a line number in a multiline TextBox

    - by Alhambra Eidos
    Hi all, I have a large text in a System.Windows.Forms.TextBox control on my form (WinForms, VS 2008). I want to find a piece of text and select the line number where I found it. For example, I have a big text, I search for "ERROR en línea", and I want to select that line number in the multiline TextBox. string textoLogDeFuenteSQL = @"SQL*Plus: Release 10.1.0.4.2 - Production on Mar Jun 1 14:35:43 2010 Copyright (c) 1982, 2005, Oracle. All rights reserved. ** MORE TEXT **** Conectado a: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production With the Partitioning, Data Mining and Real Application Testing options WHERE LAVECODIGO = 'CO_PREANUL' * ERROR en línea 2: ORA-00904: ""LAVECODIGO"": identificador no válido INSERT INTO COM_CODIGOS * ERROR en línea 1: ORA-00001: restricción única (XACO.INX_COM_CODIGOS_PK) violada"; ** MORE TEXT **** Any sample code for this? Thanks in advance.
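
    A hedged WinForms sketch (the control name is hypothetical): TextBoxBase.GetLineFromCharIndex maps a character position to its zero-based line, and Select highlights the match.

        // Find the text, map it to a line number, and select that occurrence.
        string needle = "ERROR en línea";
        int pos = textBoxLog.Text.IndexOf(needle);
        if (pos >= 0)
        {
            int line = textBoxLog.GetLineFromCharIndex(pos);  // zero-based
            textBoxLog.Select(pos, needle.Length);
            textBoxLog.ScrollToCaret();
            MessageBox.Show("Found on line " + (line + 1));
        }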

    Read the article

  • How to check only one item in a CheckedListBox

    - by Shashi Jaiswal
    I have a CheckedListBox control, I want only one item to be checked at a time, and I am currently using this code to do that. private void CLSTVariable_ItemCheck(object sender, ItemCheckEventArgs e) { // Local variable int ListIndex; CLSTVariable.ItemCheck -= CLSTVariable_ItemCheck; for (ListIndex = 0; ListIndex < CLSTVariable.Items.Count; ListIndex++) { // Uncheck all items that are not currently selected if (CLSTVariable.SelectedIndex != ListIndex) { // set item as unchecked CLSTVariable.SetItemChecked(ListIndex, false); } // if else { // set selected item as checked CLSTVariable.SetItemChecked(ListIndex, true); } } // for CLSTVariable.ItemCheck += CLSTVariable_ItemCheck; } This code is working fine, but the problem is that when I click again and again on the selected item, that selected item should not become unchecked; in other words, at least one item should always be checked...
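
    A hedged variant that also satisfies the "at least one always checked" rule: work with the pending value inside ItemCheck instead of looping afterwards.

        private void CLSTVariable_ItemCheck(object sender, ItemCheckEventArgs e)
        {
            if (e.NewValue == CheckState.Unchecked)
            {
                // Re-clicking the only checked item: veto the uncheck.
                e.NewValue = CheckState.Checked;
                return;
            }
            // A new item is being checked: uncheck everything else,
            // detaching first so this handler does not re-enter itself.
            CLSTVariable.ItemCheck -= CLSTVariable_ItemCheck;
            for (int i = 0; i < CLSTVariable.Items.Count; i++)
                if (i != e.Index) CLSTVariable.SetItemChecked(i, false);
            CLSTVariable.ItemCheck += CLSTVariable_ItemCheck;
        }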

    Read the article

  • What CPAN module can send all warnings and errors to a log file?

    - by mithaldu
    I'm maintaining some website code that currently dumps all errors and warnings into the Apache log. This is a problem for me, as I cannot access that log due to lack of root. As such, I am looking to copy all warnings and errors to a specified log file under my control, without preventing those messages from going through their usual path of execution. Now, before I spend a lot of time fiddling with the Perl internals and possibly breaking things unawares, I thought I'd look for a CPAN module that does this. However, I either do not know how to search for this properly, or I am overlooking something, and thus cannot find any module that seems suitable. Hence my asking here: what CPAN module would I use for this task?
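
    For illustration only (not a specific CPAN recommendation, and the log path is hypothetical): the mechanism such modules typically wrap is the global warn hook, which can tee each warning to a private file and then re-emit it unchanged; $SIG{__DIE__} can be handled along the same lines.

        use strict;
        use warnings;

        # Append to a log file we can actually read.
        open my $log, '>>', '/home/me/logs/app-warnings.log'
            or die "cannot open log: $!";

        $SIG{__WARN__} = sub {
            print {$log} scalar(localtime), ' ', @_;   # copy to our log
            warn @_;   # re-emit; Perl does not re-invoke the hook from inside itself
        };

        warn "something odd happened\n";   # lands in both places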

    Read the article

  • Setting Mercurial's execute bit on Windows

    - by Joe
    I work on a Mercurial repository that is checked out onto a Unix filesystem such as ext3 on some machines, and onto FAT32 on others. In Subversion, I can set the svn:executable property to control whether a file should be marked executable when checked out on a platform that supports such a bit. I can do this regardless of the platform I'm running SVN on or the filesystem containing my working copy. In Mercurial, I can chmod +x to get the same effect if the clone is on a Unix filesystem. But how can I set (or remove) the executable bit on a file on a FAT filesystem?

    Read the article

  • How to tune ASP.NET CreateUserWizard?

    - by Max
    I have created an ASP.NET WebForms site on IIS 7.5. I want to build a step-by-step user registration. I want to store the basic and detailed information about registered users in a specially created database table (not in the aspnet_Users table). I want to validate the email first, and block the next registration step for any user whose email address already exists in the database. At the last registration step I want to present a summary form, in which all previous input and select fields are duplicated with the "disabled" attribute. Please tell me how to adjust the ASP.NET CreateUserWizard control and the web.config file to these needs.
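
    A hedged fragment of one piece of this, stopping the wizard when the email already exists, using the control's CreatingUser event (the repository helper and control IDs are hypothetical):

        // Fires before the membership account is created.
        protected void CreateUserWizard1_CreatingUser(object sender,
            System.Web.Security.LoginCancelEventArgs e)
        {
            string email = CreateUserWizard1.Email;
            if (UserRepository.EmailExists(email))        // hypothetical lookup
            {
                ErrorLabel.Text = "This email address is already registered.";
                e.Cancel = true;                          // stay on the current step
            }
        }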

    Read the article

  • C code autocomplete in Eclipse

    - by Ittai
    Hi, I'm a Java developer and I've downloaded Eclipse for C (for course purposes), and to my amazement the Ctrl+Space shortcut (for autocomplete) did not work. I created a new project and a new class using the wizards, started to type "print", and then tried to trigger the autocomplete feature. After a bit of googling I arrived at C/C++ > Editor > Content Assist > Advanced, where I verified that the Help proposals, Parsing-based proposals, and Template proposals options were checked. I then went over to the Keys preferences page using the link on that page, entered a binding for the relevant content assist command, C/C++ Content Assist (type...), and chose the C/C++ Editor option in the When box. But alas, no autocompletion was offered. Can someone please point me in the right direction? Thanks, Ittai

    Read the article

  • Problem with Greek characters using Java

    - by Subhendu Mahanta
    I am trying to write Greek characters to a file using Java, like this: String greek = "\u03c1\u03ae\u03bc. \u03c7\u03b1\u03b9\u03c1\u03b5\u03c4\u03ce"; try { BufferedWriter out = new BufferedWriter(new FileWriter("E:\\properties\\outfilename.txt")); out.write(greek); out.close(); } catch (IOException e) { } It is not working. I tried using javac -encoding ISO-8859-7, and also java -Dfile.encoding=ISO-8859-7. Assuming the problem was that I do not have a Greek font on my PC, I downloaded Achilles (a Greek font, Ach4.ttf) and installed it via Control Panel > Fonts. Any ideas?
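
    A hedged sketch of the usual fix: FileWriter always writes in the platform default charset (and fonts are irrelevant to what lands in the file), so wrap a FileOutputStream in an OutputStreamWriter with an explicit encoding instead:

        import java.io.*;

        public class GreekWriter {
            public static void main(String[] args) throws IOException {
                String greek = "\u03c1\u03ae\u03bc. \u03c7\u03b1\u03b9\u03c1\u03b5\u03c4\u03ce";
                // Explicit charset; UTF-8 covers Greek (ISO-8859-7 would work too).
                Writer out = new BufferedWriter(new OutputStreamWriter(
                        new FileOutputStream("E:\\properties\\outfilename.txt"), "UTF-8"));
                try {
                    out.write(greek);
                } finally {
                    out.close();   // always close, even on failure
                }
            }
        }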

    Read the article

  • Refactoring a C# derived class with method dependencies

    - by drelihan
    Hi folks, I want to get your opinion on this. I have a class which is derived from a base class. I don't have control over the code in the base class, and it is critical to the system that I derive from it. In my class I inherit two methods that are critical to the system and are used in pretty much every function, many times. I intend to refactor this derived class and extract some classes from it - this won't be a problem. What I'm not sure about is: is it worth extracting classes if I have to constantly make callbacks to my main class to access the two methods (or public wrappers around them)? Thanks
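
    One hedged way to keep the extracted classes from holding a reference to the whole derived class: inject just the two inherited methods as delegates (the names and signatures below are invented for illustration):

        using System;

        public class ExtractedPart
        {
            private readonly Func<string, int> _lookup;  // stands in for inherited method #1
            private readonly Action<string> _log;        // stands in for inherited method #2

            public ExtractedPart(Func<string, int> lookup, Action<string> log)
            {
                _lookup = lookup;
                _log = log;
            }

            public void DoWork()
            {
                _log("result = " + _lookup("key"));  // no reference back to the big class
            }
        }

        // Inside the derived class:
        //   var part = new ExtractedPart(this.LookupValue, this.LogMessage);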

    Read the article

  • How can I make a DateTimePicker display an empty string?

    - by brass-kazoo
    I would like to be able to display a DateTimePicker that has a default value of nothing, i.e. no date. For example, I have a start date dtTaskStart and an end date dtTaskEnd for a task, but the end date is not known and not populated initially. I have specified a custom format of yyyy-MM-dd for both controls. Setting the value to null or an empty string at runtime causes an error, so how can I accomplish this? I have considered using a checkbox to control the enabling of this field, but there is still the issue of displaying an initial value. Edit: Arguably a duplicate of the question DateTimePicker Null Value (.NET), but the solution I found for my problem is not a solution for that question, so I think it should remain here for others to find.
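
    A hedged sketch of the common workaround: the control cannot hold a null Value, but a whitespace CustomFormat makes it look empty until a real date is assigned:

        // Make the picker appear blank (Value stays valid underneath).
        void ShowEmpty(DateTimePicker dtp)
        {
            dtp.Format = DateTimePickerFormat.Custom;
            dtp.CustomFormat = " ";            // a single space renders as empty
        }

        // Restore the real format once a date is known.
        void ShowDate(DateTimePicker dtp, DateTime value)
        {
            dtp.Value = value;
            dtp.Format = DateTimePickerFormat.Custom;
            dtp.CustomFormat = "yyyy-MM-dd";   // the format the question specifies
        }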

    Read the article

  • Perl TDS character sets

    - by skiphoppy
    I'm using the FreeTDS driver with DBD::Sybase, connecting to an MS SQL Server. When I query certain values of certain records, I get this error: DBD::Sybase::st fetchrow_arrayref failed: OpenClient message: LAYER = (0) ORIGIN = (0) SEVERITY = (9) NUMBER = (99) Server , database Message String: WARNING! Some character(s) could not be converted into client's character set. Unconverted bytes were changed to question marks ('?'). This seems to happen for records that contain special Windows character-set characters, such as curly quotes, copied and pasted from people's Outlook and Word messages. Unfortunately, I do not have any control over this database; sanitizing the input on the way in is obviously the way to go, but is not available to me. What FreeTDS settings do I need to change to be able to successfully query these records?
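
    A hedged configuration sketch: FreeTDS chooses its conversion target from the client charset setting in freetds.conf (the server entry and TDS version below are placeholders), and widening it to UTF-8 typically lets Windows smart-quote characters survive conversion:

        # freetds.conf - hypothetical server entry
        [mssql]
            host = sqlserver.example.com
            port = 1433
            tds version = 8.0
            client charset = UTF-8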

    Read the article

  • How to disable web page caching across all servlets

    - by Kurt
    To prevent a web page from being cached, I did something like this in a method of my Java controller servlet: public ModelAndView home(HttpServletRequest request, HttpServletResponse response) throws Exception { ModelAndView mav = new ModelAndView(ViewConstants.MV_MAIN_HOME); mav.addObject("testing", "Test this string"); mav.addObject(request); response.setHeader("Cache-Control", "no-cache, no-store"); response.setHeader("Pragma", "no-cache"); response.setDateHeader("Expires", 0); return mav; } But this only works for one particular response object. I have many similar methods in a servlet, and I have many servlets too. If I want to disable caching throughout the application, what should I do? (I do not want to add the above code for every single response object.) Thanks in advance.
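
    A hedged sketch of the usual answer: a servlet filter mapped to /* sets the headers once for every response, so the individual controller methods can drop them (the class name is illustrative):

        import java.io.IOException;
        import javax.servlet.*;
        import javax.servlet.http.HttpServletResponse;

        public class NoCacheFilter implements Filter {
            public void init(FilterConfig config) {}
            public void destroy() {}

            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletResponse response = (HttpServletResponse) res;
                // Same three headers as the controller method, applied globally.
                response.setHeader("Cache-Control", "no-cache, no-store");
                response.setHeader("Pragma", "no-cache");
                response.setDateHeader("Expires", 0);
                chain.doFilter(req, res);
            }
        }

    Register it in web.xml with a <filter> element and a <filter-mapping> whose <url-pattern> is /*.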

    Read the article

  • Graphical Programming Language

    - by prosseek
    In control engineering and instrumentation, I see that Simulink and LabVIEW (G) are pretty popular. In ESL design, I see that Agilent SystemVue is gaining some popularity. In well-established compiler theory, almost 100% of the material is about textual languages. But what about graphical languages? Is there any noticeable research or discussion about graphical programming languages? In terms of: Theory of graphical languages - syntactic/semantic analysis and expressiveness (I actually asked a question about this on SO - http://stackoverflow.com/questions/2427496/what-do-you-mean-by-the-expressiveness-in-programming-lanuguage) The possibilities of graphical languages ... Or, what do you think about graphical programming languages in general?

    Read the article

  • Chrome Browser: Cookie lost on refresh

    - by Nirmal
    I am experiencing a strange behaviour of my application in Chrome browser (No problem with other browsers). When I refresh a page, the cookie is being sent properly, but intermittently the browser doesn't seem to pass the cookie on some refreshes. This is what I am using for page headers: header("Last-Modified: " . gmdate("D, d M Y H:i:s") . " GMT"); header("Cache-Control: no-cache, must-revalidate"); // HTTP/1.1 header("Expires: Thu, 25 Nov 1982 08:24:00 GMT"); // Date in the past Do you see any issue here that might affect the cookie handling? Thank you for any suggestion.

    Read the article

  • [C#] OnPaint events (invalidated) changing execution order after a period of normal operation (runtime)

    - by Luke Mcneice
    Hi all, I have 3 data graphs that are painted via their paint events. When I have data that I need to insert into a graph, I call the control's Invalidate() method. The first control's paint event actually creates a bitmap buffer for the other 2 graphs, to avoid repeating a long loop, so the Invalidate calls are made in a specific order (1, 2, 3). This works well; however, when the graphed data reaches the end of the graph window (a PictureBox), where the data would normally start scrolling, the paint events begin firing in the wrong order (2, 3, 1). Has anyone come across this before? Why might this be happening?

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting. Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards). Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results (we have performed intra-cloud tests, and while the overall results are similar in notion, the data rates are significantly different, as are the tipping points for the various block sizes; this will be detailed separately). We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here. The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results.
    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows: We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn’t worth the trouble and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here. Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size, you will end up with a “negative optimization” due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and this is supported in the raw data provided in the linked worksheet) the charts and discussion below ignore source file sizes less than 1MB. (click chart for full size image) The chart above illustrates some interesting points about the results: when the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size); for some of the moderately-sized source files, small blocks (256KB) are best; as the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs); once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant; and the 1MB block size gives the best average improvement (~16x), though the optimal approach would be to vary the block size based on the size of the source file. (click chart for full size image) The above is another view of the same data as the prior chart, just with the axes changed (the x-axis represents file size and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, but also highlights the benefits of some of the other block sizes at different source file sizes.
    This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here, other than that this view of the data highlights the negative effects of poorly choosing a block size for smaller files. Summary What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions. Related Resources Source code for upload test application Source code for random file generator OData feed of raw data from non-optimized transfer tests Experiment Metadata Experiment Datasets 2KB Uploads 32KB Uploads 64KB Uploads 128KB Uploads 256KB Uploads 512KB Uploads 1MB Uploads 5MB Uploads 10MB Uploads 25MB Uploads 50MB Uploads 100MB Uploads 250MB Uploads 500MB Uploads 750MB Uploads 1GB Uploads Raw Data OData feeds of raw data from blocked/parallelized transfer tests Experiment Metadata Experiment Datasets Raw Data 256KB Blocks 512KB Blocks 1MB Blocks 2MB Blocks 4MB Blocks Excel worksheet showing summarizations and comparisons
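
    Purely as an illustrative companion (the author's actual code sits behind the "available here" links above, so this is a hedged reconstruction rather than that source), the shape of the blocked/parallel upload against the 1.1-era storage client looks roughly like:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Threading.Tasks;
        using Microsoft.WindowsAzure.StorageClient;

        static class BlockedUploader
        {
            // Split the file into fixed-size blocks and upload them in parallel,
            // then commit the block list. (The post's harness also computed and
            // sent an MD5 per block; omitted here for brevity.)
            public static void Upload(CloudBlockBlob blob, string path, int blockSize)
            {
                long fileSize = new FileInfo(path).Length;
                int blockCount = (int)((fileSize + blockSize - 1) / blockSize);

                // Base64 block ids, all the same length, in commit order.
                var blockIds = new List<string>();
                for (int i = 0; i < blockCount; i++)
                    blockIds.Add(Convert.ToBase64String(BitConverter.GetBytes(i)));

                Parallel.For(0, blockCount, i =>
                {
                    using (var fs = File.OpenRead(path))   // private handle per worker
                    {
                        fs.Seek((long)i * blockSize, SeekOrigin.Begin);
                        var buffer = new byte[(int)Math.Min(blockSize, fileSize - (long)i * blockSize)];
                        int read = 0;
                        while (read < buffer.Length)
                            read += fs.Read(buffer, read, buffer.Length - read);
                        blob.PutBlock(blockIds[i], new MemoryStream(buffer), null);
                    }
                });

                blob.PutBlockList(blockIds);   // assemble/commit the blob in Azure storage
            }
        }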

    Read the article

  • OpenID - How can I use my personal domain as an OpenID provider/forwarder?

    - by John Himmelman
    I read this comment in the OpenID post on the Stack Overflow blog. Kibbee says: One nice feature of OpenID that I use is the ability to delegate the OpenID verification. So I can set up my own domain name, and then put a tiny bit of XML on that page that tells the site (like Stack Overflow) to go to some other OpenID provider (in my case MyOpenID). The big plus is that I have complete control over my OpenID account. If MyOpenID goes down, I can just switch to another provider. I think anybody who has their own domain name should go for this option. What is this tiny bit of XML that will allow my server to act as an OpenID provider/forwarder?
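
    For reference, a hedged sketch of the markup in question: it is really two HTML link elements in the page's head rather than standalone XML. The myopenid.com endpoints follow the commenter's example, and the username is hypothetical.

        <head>
            <!-- OpenID 1.1 delegation: which server verifies, and which identity to assert -->
            <link rel="openid.server"   href="https://www.myopenid.com/server" />
            <link rel="openid.delegate" href="https://username.myopenid.com/" />
            <!-- OpenID 2.0 equivalents -->
            <link rel="openid2.provider" href="https://www.myopenid.com/server" />
            <link rel="openid2.local_id" href="https://username.myopenid.com/" />
        </head>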

    Read the article
