Search Results



  • update datagridview using ajax in my asp.net without refreshing the page (display real time data)

    - by kurt_jackson19
    I need to display real-time data from MS SQL 2005. I saw some blogs that recommend Ajax to solve my problem. Right now I have my default.aspx page only, and as a workaround I am able to display the data from my DB. But once I add data manually to my DB, no update is made. Any suggestions, guys, to fix this problem? I need to update the DataGridView without refreshing the page. Here's the code in Default.aspx.cs:

        using System;
        using System.Data;
        using System.Configuration;
        using System.Web;
        using System.Web.Security;
        using System.Web.UI;
        using System.Web.UI.WebControls;
        using System.Web.UI.WebControls.WebParts;
        using System.Web.UI.HtmlControls;
        using System.Data.SqlClient;

        public partial class _Default : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                FillDataGridView();
            }

            protected void up1_Load(object sender, EventArgs e)
            {
                FillDataGridView();
            }

            protected void FillDataGridView()
            {
                DataSet objDs = new DataSet();
                SqlConnection myConnection = new SqlConnection(
                    ConfigurationManager.ConnectionStrings["MainConnStr"].ConnectionString);
                SqlDataAdapter myCommand;
                string select = "SELECT * FROM Categories";
                myCommand = new SqlDataAdapter(select, myConnection);
                myCommand.SelectCommand.CommandType = CommandType.Text;
                myConnection.Open();
                myCommand.Fill(objDs);
                GridView1.DataSource = objDs;
                GridView1.DataBind();
            }
        }

    Code in my Default.aspx:

        <html xmlns="http://www.w3.org/1999/xhtml" >
        <head runat="server">
            <title>Ajax Sample</title>
        </head>
        <body>
            <form id="form1" runat="server">
            <asp:ScriptManager ID="ScriptManager1" runat="server" EnablePageMethods="true">
                <Scripts>
                    <asp:ScriptReference Path="JScript.js" />
                </Scripts>
            </asp:ScriptManager>
            <asp:UpdatePanel ID="UpdatePanel1" runat="server" OnLoad="up1_Load">
                <ContentTemplate>
                    <asp:GridView ID="GridView1" runat="server" Height="136px" Width="325px"/>
                </ContentTemplate>
                <Triggers>
                    <asp:AsyncPostBackTrigger ControlID="GridView1" />
                </Triggers>
            </asp:UpdatePanel>
            </form>
        </body>
        </html>

    My problem now is how to call or use the ajax.js, and how to write the code that calls FillDataGridView() in my Default.aspx.cs page. Thank you guys, hope anyone can help me with this problem.
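    One way to get the periodic refresh without any hand-written JavaScript is the ASP.NET AJAX Timer control: it raises an asynchronous postback inside the UpdatePanel, so only the panel's content is re-rendered. A minimal sketch (the 5-second Interval and the Timer1 naming are assumptions, not taken from the question):

        <asp:UpdatePanel ID="UpdatePanel1" runat="server">
            <ContentTemplate>
                <!-- Fires an async postback every 5 seconds; only this panel refreshes. -->
                <asp:Timer ID="Timer1" runat="server" Interval="5000" OnTick="Timer1_Tick" />
                <asp:GridView ID="GridView1" runat="server" />
            </ContentTemplate>
        </asp:UpdatePanel>

        // Code-behind: re-query the database and re-bind the grid on every tick.
        protected void Timer1_Tick(object sender, EventArgs e)
        {
            FillDataGridView();
        }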


  • Common Mercator Projection formulas for Google Maps not working correctly

    - by Tom Halladay
    I am building a tile overlay server for Google Maps in C#, and have found a few different code examples for calculating Y from latitude. After getting them to work in general, I started to notice certain cases where the overlays were not lining up properly. To test this, I made a test harness to compare Google Maps' Mercator LatToY conversion against the formulas I found online. As you can see below, they do not match in certain cases.

    Case #1, zoomed out: the problem is most evident when zoomed out. Up close, the problem is barely visible.

    Case #2, point proximity to top and bottom of the viewing bounds: the problem is worse in the middle of the viewing bounds, and gets better towards the edges. This behavior can negate the behavior of Case #1.

    The test: I created a Google Maps page to display red lines using the Google Maps API's built-in Mercator conversion, and overlaid this with an image using the reference code for doing the Mercator conversion. These conversions are represented as black lines. Compare the difference.

    The results: check out the top-most and bottom-most lines. The problem gets visually larger but numerically smaller as you zoom in, and it all but disappears at closer zoom levels, regardless of screen orientation.

    The code. Google Maps client-side code:

        var lat = 0;
        for (lat = -80; lat <= 80; lat += 5)
        {
            map.addOverlay(new GPolyline([new GLatLng(lat, -180), new GLatLng(lat, 0)], "#FF0033", 2));
            map.addOverlay(new GPolyline([new GLatLng(lat, 0), new GLatLng(lat, 180)], "#FF0033", 2));
        }

    Server-side code (references: Tile Cutter, http://mapki.com/wiki/Tile_Cutter; OpenStreetMap wiki, http://wiki.openstreetmap.org/wiki/Mercator):

        protected override void ImageOverlay_ComposeImage(ref Bitmap ZipCodeBitMap)
        {
            Graphics LinesGraphic = Graphics.FromImage(ZipCodeBitMap);
            Int32 MapWidth = Convert.ToInt32(Math.Pow(2, zoom) * 255);
            Point Offset = Cartographer.Mercator2.toZoomedPixelCoords(North, West, zoom);
            TrimPoint(ref Offset, MapWidth);
            for (Double lat = -80; lat <= 80; lat += 5)
            {
                Point StartPoint = Cartographer.Mercator2.toZoomedPixelCoords(lat, -179, zoom);
                Point EndPoint = Cartographer.Mercator2.toZoomedPixelCoords(lat, -1, zoom);
                TrimPoint(ref StartPoint, MapWidth);
                TrimPoint(ref EndPoint, MapWidth);
                StartPoint.X = StartPoint.X - Offset.X;
                EndPoint.X = EndPoint.X - Offset.X;
                StartPoint.Y = StartPoint.Y - Offset.Y;
                EndPoint.Y = EndPoint.Y - Offset.Y;
                LinesGraphic.DrawLine(new Pen(Color.Black, 2),
                    StartPoint.X, StartPoint.Y, EndPoint.X, EndPoint.Y);
                LinesGraphic.DrawString(lat.ToString(),
                    new Font("Verdana", 10), new SolidBrush(Color.Black),
                    new Point(Convert.ToInt32((width / 3.0) * 2.0), StartPoint.Y));
            }
        }

        protected void TrimPoint(ref Point point, Int32 MapWidth)
        {
            point.X = Math.Max(point.X, 0);
            point.X = Math.Min(point.X, MapWidth - 1);
            point.Y = Math.Max(point.Y, 0);
            point.Y = Math.Min(point.Y, MapWidth - 1);
        }

    So, has anyone ever experienced this? Dare I ask, resolved this? Or do you simply have a better C# implementation of Mercator projection coordinate conversion? Thanks!
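    For reference, the standard Web Mercator conversion used by most tile servers maps latitude to a global pixel Y using a 256-pixel base tile that doubles per zoom level. A minimal sketch (a hypothetical helper, not the poster's Cartographer class); note the 256 multiplier, since the 255 in the snippet above would plausibly compound into exactly this kind of small, zoom-dependent offset:

        // Web Mercator: lat/lng -> absolute pixel coordinates at a zoom level.
        // Assumes a 256px base tile, doubling per zoom level.
        static System.Drawing.Point LatLngToPixel(double lat, double lng, int zoom)
        {
            double mapSize = 256.0 * Math.Pow(2, zoom);
            double x = (lng + 180.0) / 360.0 * mapSize;
            double sinLat = Math.Sin(lat * Math.PI / 180.0);
            double y = (0.5 - Math.Log((1 + sinLat) / (1 - sinLat)) / (4 * Math.PI)) * mapSize;
            return new System.Drawing.Point((int)x, (int)y);
        }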


  • Writing more efficient xquery code (avoiding redundant iteration)

    - by Coquelicot
    Here's a simplified version of a problem I'm working on: I have a bunch of XML data that encodes information about people. Each person is uniquely identified by an 'id' attribute, but they may go by many names. For example, in one document I might find:

        <person id=1>Paul Mcartney</person>
        <person id=2>Ringo Starr</person>

    And in another I might find:

        <person id=1>Sir Paul McCartney</person>
        <person id=2>Richard Starkey</person>

    I want to use XQuery to produce a new document that lists every name associated with a given id, i.e.:

        <person id=1>
          <name>Paul McCartney</name>
          <name>Sir Paul McCartney</name>
          <name>James Paul McCartney</name>
        </person>
        <person id=2>
          ...
        </person>

    The way I'm doing this now in XQuery is something like this (pseudocode-esque):

        let $ids := distinct-terms( [all the id attributes on people] )
        for $id in $ids
        return
          <person id={$id}>
          {
            for $unique-name in distinct-values (
              for $name in ( [all names] )
              where $name/@id=$id
              return $name
            )
            return <name>{$unique-name}</name>
          }
          </person>

    The problem is that this is really slow. I imagine the bottleneck is the innermost loop, which executes once for every id (of which there are about 1200). I'm dealing with a fair bit of data (300 MB, spread over about 800 XML files), so even a single execution of the query in the inner loop takes about 12 seconds, which means that repeating it 1200 times will take about 4 hours (which might be optimistic - the process has been running for 3 hours so far). Not only is it slow, it's using a whole lot of virtual memory. I'm using Saxon, and I had to set Java's maximum heap size to 10 GB (!) to avoid out-of-memory errors, and it's currently using 6 GB of physical memory.

    Here's how I'd really like to do this (in Pythonic pseudocode):

        persons = {}
        for id in ids:
            person[id] = set()
        for person in all_the_people_in_my_xml_document:
            persons[person.id].add(person.name)

    There, I just did it in linear time, with only one sweep of the XML document. Now, is there some way to do something similar in XQuery? Surely if I can imagine it, a reasonable programming language should be able to do it (he said quixotically). The problem, I suppose, is that unlike Python, XQuery doesn't (as far as I know) have anything like an associative array. Is there some clever way around this? Failing that, is there something better than XQuery that I might use to accomplish my goal? Because really, the computational resources I'm throwing at this relatively simple problem are kind of ridiculous.
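    XQuery 3.0 added a group by clause that does exactly this hash-style grouping in a single pass, and later Saxon releases support it. A minimal sketch under that assumption (collection("people") is a placeholder for however the 800 files are addressed):

        for $person in collection("people")//person
        group by $id := $person/@id
        return
          <person id="{$id}">
          {
            for $name in distinct-values($person)
            return <name>{$name}</name>
          }
          </person>

    After the group by, $person is rebound to the whole sequence of person elements sharing that id, so the distinct-values call runs once per group instead of once per id over the entire corpus.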


  • Minimum-Waste Print Job Grouping Algorithm?

    - by Matt Mc
    I work at a publishing house and I am setting up one of our presses for "ganging", in other words, printing multiple jobs simultaneously. Given that different print jobs can have different quantities, and anywhere from 1 to 20 jobs might need to be considered at a time, the problem is to determine which jobs to group together to minimize waste (waste coming from over-printing on smaller-quantity jobs in a given set, that is).

    Given the following stable data:

    - All jobs are equal in terms of spatial size; placement on paper doesn't come into consideration.
    - There are three "lanes", meaning that three jobs can be printed simultaneously.
    - Ideally, each lane has one job. Part of the problem is minimizing how many lanes each job is run on. If necessary, one job could be run on two lanes, with a second job on the third lane.
    - The "grouping" waste from a given set of jobs (say their quantities are x, y and z) is the highest number minus the two lower numbers. So if x is the highest, the grouping waste is (x - y) + (x - z). Otherwise stated, waste is produced by printing jobs Y and Z (in excess of their quantities) up to the quantity of X.
    - The grouping waste is a qualifier for the given set, meaning it cannot exceed a certain quantity or the job is simply printed alone.

    So the question is stated: how to determine which sets of jobs are grouped together, out of any given number of jobs, based on the qualifiers of 1) three similar quantities, or 2) two quantities where one is approximately double the other, and with the aim of minimal total grouping waste across the various sets.

    (Edit) Quantity information: typical job quantities can be from 150 to 350 on foreign languages, or 500 to 1000 on English print runs. This data can be used to set up some scenarios for an algorithm. For example, say you had 5 jobs: 1000, 500, 500, 450, 250. By looking at it, I can see a couple of answers. Obviously (1000/500/500) is not efficient, as you'll have a grouping waste of 1000. (500/500/450) is better, as you'll have a waste of 50, but then you run (1000) and (250) alone. But you could also run (1000/500) with 1000 on two lanes, (500/250) with 500 on two lanes, and then (450) alone. In terms of trade-offs for lane minimization vs. wastage, we could say that any grouping waste over 200 is excessive. (End edit)

    ...Needless to say, quite a problem. (For me.) I am a moderately skilled programmer, but I do not have much familiarity with algorithms and I am not fully studied in the mathematics of the area. I'm in the process of writing a sort of brute-force program that simply tries all options, neglecting any option tree that seems to have excessive grouping waste. However, I can't help but hope there's an easier and more efficient method.

    I've looked at various websites trying to find out more about algorithms in general and have been slogging my way through the symbology, but it's slow going. Unfortunately, Wikipedia's articles on the subject are very cross-dependent and it's difficult to find an "in". The only thing I've been able to really find would seem to be a definition of the rough type of algorithm I need: "exclusive distance clustering", one-dimensionally speaking. I did look at what seems to be the popularly referred-to algorithm on this site, the bin packing one, but I was unable to see exactly how it would work with my problem. Any help is appreciated. :)
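    For concreteness, here is a minimal C# sketch of the stated waste rule plus a simple greedy pass over quantities sorted in descending order. The greedy grouping is an assumption for illustration, not an optimal algorithm: it will not find the "run one job on two lanes" trick from the example, and the 200 cutoff is the poster's own threshold:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class Ganging
        {
            // Grouping waste for a set: every job is printed up to the largest quantity.
            static int GroupingWaste(int[] q) => q.Sum(x => q.Max() - x);

            // Greedy sketch: sort descending, try consecutive triples, else run alone.
            static List<int[]> GreedyGroups(IEnumerable<int> quantities, int maxWaste = 200)
            {
                var sorted = quantities.OrderByDescending(x => x).ToList();
                var groups = new List<int[]>();
                for (int i = 0; i < sorted.Count; )
                {
                    if (i + 2 < sorted.Count)
                    {
                        var triple = new[] { sorted[i], sorted[i + 1], sorted[i + 2] };
                        if (GroupingWaste(triple) <= maxWaste) { groups.Add(triple); i += 3; continue; }
                    }
                    groups.Add(new[] { sorted[i] });  // waste too high: run this job alone
                    i += 1;
                }
                return groups;
            }
        }

    Sorting first means candidate triples are always made of near-equal quantities, which is where the waste formula is smallest; a full solution would also enumerate the two-lane pairings and compare totals.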


  • Forcing a transaction to rollback on validation errors in Seam

    - by Chris Williams
    Quick version: we're looking for a way to force a transaction to roll back when specific situations occur during the execution of a method on a backing bean, but we'd like the rollback to happen without having to show the user a generic 500 error page. Instead, we'd like the user to see the form she just submitted and a FacesMessage that indicates what the problem was.

    Long version: we've got a few backing beans that use components to perform a few related operations in the database (using JPA/Hibernate). During the process, an error can occur after some of the database operations have happened. This could be for a few different reasons, but for this question let's assume there's been a validation error that is detected after some DB writes have happened and that wasn't detectible before the writes occurred. When this happens, we'd like to make sure all of the DB changes up to this point are rolled back.

    Seam can deal with this, because if you throw a RuntimeException out of the current FacesRequest, Seam will roll back the current transaction. The problem with this is that the user is shown a generic error page. In our case, we'd actually like the user to be shown the page she was on, with a descriptive message about what went wrong, and have the opportunity to correct the bad input that caused the problem.

    The solution we've come up with is to throw an exception, from the component that discovers the validation problem, with the annotation:

        @ApplicationException( rollback = true )

    Then our backing bean can catch this exception, assume the component that threw it has published the appropriate FacesMessage, and simply return null to take the user back to the input page with the error displayed. The ApplicationException annotation tells Seam to roll back the transaction, and we're not showing the user a generic error page.

    This worked well for the first place we used it, which happened to only be doing inserts. In the second place we tried to use it, we have to delete something during the process. In this second case, everything works if there's no validation error. If a validation error does happen, the rollback exception is thrown and the transaction is marked for rollback. Even if no database modifications have happened to be rolled back, when the user fixes the bad data and re-submits the page, we're getting:

        java.lang.IllegalArgumentException: Removing a detached instance

    The detached instance is lazily loaded from another object (there's a many-to-one relationship). That parent object is loaded when the backing bean is instantiated. Because the transaction was rolled back after the validation error, the object is now detached.

    Our next step was to change this page from conversation scope to page scope. When we did this, Seam can't even render the page after the validation error, because our page has to hit the DB to render and the transaction has been marked for rollback.

    So my question is: how are other people dealing with handling errors cleanly and properly managing transactions at the same time? Better yet, I'd love to be able to use everything we have now if someone can spot something I'm doing wrong that would be relatively easy to fix. I've read the Seam Framework article on unified error page and exception handling, but this is geared more towards the more generic errors your application might encounter.
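    For reference, the pattern described above boils down to an exception class like this minimal sketch (the class name is hypothetical; the annotation is Seam 2's):

        import org.jboss.seam.annotations.exception.ApplicationException;

        // Thrown by a component when validation fails after some DB writes.
        // rollback = true marks the current transaction for rollback, while
        // end = false (the default) keeps the conversation alive so the form
        // can be redisplayed with its FacesMessage.
        @ApplicationException(rollback = true)
        public class LateValidationException extends RuntimeException {
            public LateValidationException(String message) {
                super(message);
            }
        }

    The backing bean then catches LateValidationException and returns null, so navigation stays on the submitted form instead of falling through to the error page.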


  • How can I implement the Gale-Shapley stable marriage algorithm in Perl?

    - by srk
    Problem: we have an equal number of men and women. Each man has a preference score toward each woman, and so does each woman for each man. Each of the men and women has certain interests, and based on the interests we calculate the preference scores.

    So initially we have an input file having x columns. The first column is the person (man/woman) id; ids are just the numbers 0..n (the first half are men and the next half women). The remaining x-1 columns hold the interests; these are integers too. Now, using this n by x-1 matrix, we have come up with an n by n/2 matrix. The new matrix has all men and women as its rows, and scores for the opposite sex in its columns.

    We have to sort the scores in descending order, and we also need to know the id of the person each score belongs to after sorting. So here I wanted to use a hash table. Once we get the scores, we need to make up pairs, for which we need to follow some rules. My trouble is with the second matrix of n by n/2 that needs to give information about which man/woman has how much preference for which woman/man. I need these scores sorted so that I know who is the first preferred woman/man, the second preferred, and so on for each man/woman. I hope to get good suggestions on the data structures I use. I prefer PHP or Perl. Thank you in advance.

    Hey guys, this is not homework. This is a slightly modified version of the stable marriage algorithm, and I have a working solution; I am only working on optimizing my code.

    More info: it is very similar to the stable marriage problem, but here we need to calculate the scores based on the interests the people share. I have implemented it the way you see on the wiki page: http://en.wikipedia.org/wiki/Stable_marriage_problem. My problem is not solving the problem; I solved it and can run it. I am just trying to find a better solution, so I am asking for suggestions on the type of data structure to use.

    Conceptually I tried using an array of hashes, where the array index gives the person id and the hash in it gives the ids => scores in sorted order. I initially start with an array of hashes. Now I sort the hashes on values, but I could not store the sorted hashes back in an array, so I just stored the keys after sorting and used these to get the values from my initial unsorted hashes. Can we store the hashes after sorting? Can you suggest a better structure?
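    Perl hashes are inherently unordered, so a "sorted hash" cannot be stored as a hash; the idiomatic structure is exactly what the poster fell into, kept deliberately: per person, an array of partner ids sorted by score, alongside the unsorted score hash for O(1) lookups. A minimal sketch, assuming @scores is the array of hashrefs described above:

        # $scores[$person]{$partner} = preference score (array of hashes, as above).
        # Build, for each person, the partner ids sorted by descending score.
        my @pref_order;
        for my $person (0 .. $#scores) {
            my $s = $scores[$person];
            $pref_order[$person] = [ sort { $s->{$b} <=> $s->{$a} } keys %$s ];
        }

        # First and second choices for person 0, and the top score itself:
        my ($first, $second) = @{ $pref_order[0] }[0, 1];
        my $top_score = $scores[0]{$first};

    This keeps the sort cost to one O(m log m) pass per person at setup time, which is what Gale-Shapley's proposal loop wants anyway.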


  • How to generate a program template by providing an abstract class

    - by Byron-Lim Timothy Steffan
    I have the following problem. The first step is to implement a program which follows a specific protocol on startup. Therefore, functions such as onInit, onConfigRequest, etc. will be necessary. (These are triggered e.g. by an incoming message on a TCP port.)

    My goal is to create a class, for example an abstract one, which has abstract functions such as onInit(), etc. A programmer should just inherit from this base class and merely override these abstract functions of the base class. The rest, such as the protocol messaging, should simply be handled in the background (using the code of the base class) and should not need to appear in the programmer's code.

    What is the correct design strategy for such tasks? And how do I deal with the fact that the static Main method is not inheritable? What are the key tags for this problem? (I have problems searching for a solution since I lack clear statements on this problem.) The goal is to create some sort of library/class which, included in one's code, results in executables following the protocol.

    EDIT (new explanation): Okay, let me try to explain in more detail. In this case the programs should be clients within a client-server architecture. We have a client-server connection via TCP/IP. Each program needs to follow a specific protocol upon program start: as soon as my program starts and gets connected to the server, it will receive an init message (TcpClient); when this happens, it should trigger the function onInit(). (Should this be implemented by an event system?) After onInit(), an acknowledgement message should be sent to the server. Afterwards there are some other steps, e.g. a config message from the server which triggers an onConfig, and so on.

    Let's concentrate on the onInit function. The idea is that onInit (and onConfig and so on) should be the only functions the programmer should edit, while the overall protocol messaging is hidden from him. Therefore, I thought using an abstract class with the abstract methods onInit(), onConfig() in it should be the right thing. The static Main method I would like to hide, since within it e.g. there will be some part which connects to the TCP port, which reacts on the init message and which will call the onInit function. Two problems here: 1. the static Main method can't be inherited, can it? 2. I cannot call abstract functions from the static Main method in the abstract master class.

    Let me give a pseudo-example for my ideas:

        public abstract class MasterClass
        {
            static void Main(string[] args)
            {
                // 1. open TCP connection
                // 2. wait for init message from server
                // 3. onInit();
                // 4. send acknowledgement that the init routine ended successfully
                // 5. wait for config message from server
                // 6. ...
            }

            public abstract void onInit();
            public abstract void onConfig();
        }

    I hope you get the idea now! The programmer should afterwards inherit from this master class and merely need to edit the functions onInit and so on. Is this possible? How? What else do you recommend for solving this?

    EDIT: The strategy idea provided below is a good one! Check out my comment on that.
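    One way to resolve both problems (a minimal sketch; class names are hypothetical) is the Template Method pattern: keep the fixed protocol skeleton in a non-static Run() method of the library's base class, and let the user's tiny Main simply instantiate the subclass. The abstract hooks are then dispatched virtually, so nothing needs to be called from a static context in the base class:

        // Library code: the protocol lives here and never appears in user code.
        public abstract class ProtocolClient
        {
            public void Run()                   // template method: fixed protocol skeleton
            {
                // open TCP connection, wait for the init message ... (omitted)
                OnInit();                       // virtual dispatch into the subclass
                // send acknowledgement, wait for the config message ...
                OnConfig();
            }

            protected abstract void OnInit();
            protected abstract void OnConfig();
        }

        // User code: only the hooks are overridden.
        public class MyClient : ProtocolClient
        {
            protected override void OnInit()   { /* application-specific init */ }
            protected override void OnConfig() { /* application-specific config */ }

            static void Main(string[] args) => new MyClient().Run();
        }

    The one-line Main is the only boilerplate the programmer writes; everything protocol-related stays sealed inside Run().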


  • Ruby on Rails is complaining about a method that doesn't exist that is built into Active Record. What's wrong?

    - by grg-n-sox
    This will probably just be a simple problem and I am just blind or an idiot, but I could use some help. So I am going over some basic guides in Rails, reviewing the basics and such for an upcoming exam. One of the guides included was the sort-of-standard getting started guide over at guides.rubyonrails.org; here is the link if you need it. Also, all my code for this app is from there, so I have no problem releasing any of my code since it should be the same as shown there. I didn't do a copy-paste, but I basically was typing with Vim in one half of my screen and the web page in the other half, typing what I saw.

    http://guides.rubyonrails.org/getting_started.html

    So like I said, I was going along the guide when I noticed that past a certain point in the tutorial, I was always getting an error on the site. To find the section of code, just hit Ctrl+F on the page (or whatever you have search/find set to) and enter "accepts_". This should immediately direct you to this chunk of code:

        class Post < ActiveRecord::Base
          validates_presence_of :name, :title
          validates_length_of :title, :minimum => 5
          has_many :comments
          has_many :tags

          accepts_nested_attributes_for :tags, :allow_destroy => :true,
            :reject_if => proc { |attrs| attrs.all? { |k, v| v.blank? } }
        end

    So I tried putting this in my code. It is in ~/Rails/blog/app/models/post.rb in case you are wondering. However, even after all the other code I put in past that point in the guide, hoping I was just missing some line of code that would come up later, I got nothing: the same error every time. This is what I get:

        NoMethodError in PostsController#index

        undefined method `accepts_nested_attributes_for' for #<Class:0xb7109f98>

        /usr/lib/ruby/gems/1.8/gems/activerecord-2.2.2/lib/active_record/base.rb:1833:in `method_missing'
        app/models/post.rb:7
        app/controllers/posts_controller.rb:9:in `index'

        Request Parameters: None
        Response Headers: {"Content-Type"=>"", "cookie"=>[], "Cache-Control"=>"no-cache"}

    Now, I copied the above code from the guide. The two code sections I edited that are mentioned in the error message I will paste as-is below:

        class PostsController < ApplicationController
          # GET /posts
          # GET /posts.xml
          before_filter :find_post, :only => [:show, :edit, :update, :destroy]

          def index
            @posts = Post.find(:all) # <= the line 9 referred to in the error message

            respond_to do |format|
              format.html # index.html.erb
              format.xml  { render :xml => @posts }
            end
          end

        class Post < ActiveRecord::Base
          validates_presence_of :name, :title
          validates_length_of :title, :minimum => 5
          has_many :comments
          has_many :tags

          accepts_nested_attributes_for :tags, :allow_destroy => :true, # <= problem
            :reject_if => proc { |attrs| attrs.all? { |k, v| v.blank? } }
        end

    Also, here is my local gem list. I do note that they are a bit out of date, but the default Rails install on any of the school machines (an environment likely for my exam) is basically 'gem install rails --version 2.2.2', and since they are Windows machines, they come with all the normal Windows Ruby gems that come with the Ruby installer. However, I am running this off a Debian virtual machine of mine, but trying to set it up similarly, and I figured the Windows Ruby gems wouldn't change anything in Rails.

        *** LOCAL GEMS ***

        actionmailer (2.2.2)
        actionpack (2.2.2)
        activerecord (2.2.2)
        activeresource (2.2.2)
        activesupport (2.2.2)
        gem_plugin (0.2.3)
        hpricot (0.8.2)
        linecache (0.43)
        log4r (1.1.7)
        ptools (1.1.9)
        rack (1.1.0)
        rails (2.2.2)
        rake (0.8.7)
        sqlite3-ruby (1.2.3)

    So any ideas on what the problem is? Thanks in advance.
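    Worth noting: accepts_nested_attributes_for was introduced in Rails 2.3, so ActiveRecord 2.2.2 genuinely does not have it, and method_missing then raises exactly this NoMethodError. A hedged sketch of the usual pre-Bundler fix (the 2.3.5 version number is just an example of a 2.3.x release):

        # accepts_nested_attributes_for shipped with Rails 2.3, so install a
        # 2.3.x release and pin the app to it:
        #
        #   gem install rails --version 2.3.5
        #
        # config/environment.rb:
        RAILS_GEM_VERSION = '2.3.5' unless defined? RAILS_GEM_VERSION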


  • Did I find a bug in PHP's `crypt()`?

    - by Nathan Long
    I think I may have found a bug in PHP's crypt() function under Windows. However: I recognize that it's probably my fault. PHP is used by millions and worked on by thousands; my code is used by tens and worked on by me. (This argument is best explained on Coding Horror.) So I'm asking for help: show me my fault. I've been trying to find it for a few days now, with no luck.

    The setup: I'm using a Windows server installation with Apache 2.2.14 (Win32) and PHP 5.3.2. My development box runs Windows XP Professional; the 'production' server (this is an intranet setup) runs Windows Storage Server 2003. The problem happens on both. I don't see anything in php.ini related to crypt(), but will happily answer questions about my config.

    The problem: several scripts in my PHP app occasionally hang: the page sits there on 'waiting for localhost' and never finishes. Each of these scripts uses crypt to hash a user's password before storing it in the database or, in the case of the login page, to hash the entered password before comparing it to the version stored in the database. Since the login page is the simplest, I focused on it for testing. I repeatedly logged in, and found that it would hang maybe 4 out of 10 times. As an experiment, I changed the login page to use the plain text password and changed my password in the database to its plain text version. The page stopped hanging. I saw that PHP's latest version lists this bugfix: "Fixed bug #51059 (crypt crashes when invalid salt are [sic] given)." So I created a very simple test script, as follows, using the same salt given in an official example:

        $foo = crypt('rasmuslerdorf', 'r1');
        echo $foo;

    This page, too, will hang if I reload it like crazy. I only see it hanging in Chrome, but regardless of browser, the effect on Apache is the same.

    Effect on Apache: when these pages hang, Apache's server-status page (which I explained here, regarding a different problem) increments the number of requests being processed and decrements the number of idle workers. The requests being processed almost all have a status of 'Sending Reply', though sometimes for a moment they will show either 'Reading request' or 'keepalive (read)'. Eventually, Apache may crash. When it does, the Windows crash report looks like this:

        szAppName: httpd.exe
        szAppVer: 2.2.14.0
        szModName: php5ts.dll
        szModVer: 5.3.1.0
        // OK, this report was before I upgraded to PHP 5.3.2, but that didn't fix it
        offset: 00a2615

    Is it my fault? I'm tempted to file a bug report to PHP on this. The argument against it is, as stated above, that bugs are nearly always my fault. However, my argument in favor of 'it's PHP's fault' is:

    - I'm using Windows, whereas most servers use Linux (I don't get to choose this), so the chances are greater that I've found an edge case.
    - There was recently a bug with crypt(), so maybe it still has issues.
    - I have made the simplest test case I can, and I still have the problem.

    Can anyone duplicate this? Can you suggest where I've gone wrong? Should I file the bug after all? Thanks in advance for any help you may give.
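    One thing worth testing (a hedged sketch, not a confirmed fix): a two-character salt like 'r1' selects the legacy DES scheme, and builds of that era had known trouble with malformed or edge-case salts (the bug #51059 quoted above). Passing an explicitly well-formed salt for a specific scheme rules that variable out:

        <?php
        // Salted MD5 ($1$) salt: "$1$", up to 8 chars from [a-zA-Z0-9./], then "$".
        $salt = '$1$' . substr(str_shuffle('abcdefghijklmnopqrstuvwxyz0123456789'), 0, 8) . '$';
        $hash = crypt('rasmuslerdorf', $salt);
        echo $hash, "\n";

        // Verify by re-crypting with the stored hash as the salt:
        var_dump(crypt('rasmuslerdorf', $hash) === $hash);

    If the hang disappears with a well-formed MD5 salt, that is strong evidence for the DES-salt code path and good material for the bug report.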


  • xsl:include fails after upgrading PHP version; is libxml/libxslt version mismatch the issue?

    - by Wiseguy
    I'm running Windows XP with the precompiled PHP binaries available from windows.php.net. I upgraded from PHP 5.2.5 to PHP 5.2.16, and now the xsl:includes in some of my stylesheets stopped working. Testing each version in succession, I discovered that it worked up through 5.2.8 and does not work in 5.2.9+. I now get the following three errors for each xsl:include:

        Warning: XSLTProcessor::importStylesheet() [xsltprocessor.importstylesheet]: I/O warning : failed to load external entity "file%3A/C%3A/path/to/included/stylesheet.xsl" in ... on line 227

        Warning: XSLTProcessor::importStylesheet() [xsltprocessor.importstylesheet]: compilation error: file file%3A//C%3A/path/to/included/stylesheet.xsl line 36 element include in ... on line 227

        Warning: XSLTProcessor::importStylesheet() [xsltprocessor.importstylesheet]: xsl:include : unable to load file%3A/C%3A/path/to/included/stylesheet.xsl in ... on line 227

    I presume this is because it cannot find the specified file. Many of the includes are in the same directory as the stylesheet being transformed and have no directory in the path, i.e. <xsl:include href="fileInSameDir.xsl">. Interestingly, in the first and third errors, it is displaying the file:// protocol with only one slash instead of the correct two. I'm guessing that's the problem. (When I hard-code a full path using "file:/" it fails, but when I hard-code a full path with "file://" it works.) But what could cause that? A bug in libxslt/libxml?

    I also found an apparent version mismatch between libxml and the version of libxml that libxslt was compiled against:

        5.2.5   libxml Version = 2.6.26   libxslt compiled against libxml Version = 2.6.26
        5.2.8   libxml Version = 2.6.32   libxslt compiled against libxml Version = 2.6.32
        === it breaks in versions starting at 5.2.9 ===
        5.2.9   libxml Version = 2.7.3    libxslt compiled against libxml Version = 2.6.32
        5.2.16  libxml Version = 2.7.7    libxslt compiled against libxml Version = 2.6.32

    Up until PHP 5.2.9, libxslt was compiled against the same version of libxml that was included with PHP. But starting with PHP 5.2.9, libxslt was compiled against an older version of libxml than the one included with PHP. Is this a problem with the distributed binaries, or just a coincidence? To test this, I imagine PHP could be built with different versions of libxml/libxslt to see which combinations work or don't. Unfortunately, I'm out of my element in a Windows world, and building PHP on Windows seems over my head.

    Regrettably, I've been thus far unable to reproduce this problem with an example outside my app, so I'm struggling to narrow it down and can't submit a specific bug. So, do you think it is caused by a version mismatch problem in the distributed binaries? A bug introduced in PHP 5.2.9? A bug introduced in libxml 2.7? Something else? I'm stumped. Any thoughts that could point me in the right direction are greatly appreciated. Thanks.
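    For anyone comparing builds, the loaded library versions can be dumped at runtime; a small diagnostic sketch (the label text in the regex mirrors the phpinfo output quoted above, so treat it as an assumption):

        <?php
        // libxml version actually loaded into PHP:
        echo 'libxml loaded: ', LIBXML_DOTTED_VERSION, "\n";

        // The xsl section of phpinfo() also reports which libxml version
        // libxslt was compiled against; grep it out for comparison.
        ob_start();
        phpinfo(INFO_MODULES);
        $modules = ob_get_clean();
        if (preg_match('/libxslt compiled against libxml Version => ([\d.]+)/', $modules, $m)) {
            echo 'libxslt compiled against libxml: ', $m[1], "\n";
        }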


  • What can I do to improve a project if there is a no-listening situation? Developers vs. management

    - by NazGul
    Hi all, I hope that I'm not the only one, and that I can get an answer from someone with more experience than me, so I can think more clearly and not get depressed with this developer's life.

    I've been working as a developer for a small company for three years now. In those three years I've been working on the same project and, sincerely, I think this project could be used as a case study, because it has all the situations that cannot happen in a project and that make a project fail.

    To begin with, and I believe you've already noticed, the project is 3 years old already (development only) and is still unfinished, because in every meeting there is a "new priority", or a "new problem" to solve, or a "new feature" to add. So, the first problem is: no target set. How can you know when something is finished if you don't know what you want? I understand management, because they see an opportunity and try to get it, but I don't understand how they cannot see (or hear us say) that they'll lose everything they already have and whatever they'd eventually get.

    Second, there is no team. My team consists of three people: a senior developer, a DBA and, finally, me, for all the work (support, testing, new features, bug fixing, meetings, project management of clients, etc.), aka the junior developer. The first (the senior developer) does not perform any tests on his changes, so most of the time his changes give us problems (us = me, since I'm the one who will fix them). The second (the DBA) is an uncompromising person and you cannot talk to him (believe me, I tried!). In his view, everything he does is fantastic, even if it is done in the most complicated way possible. And he does everything he wants, even if we need it only 5 months later, and an extra hand would help with the things we have to do now. As you can see, it is very hard to work with no help.

    Third, there is no testing. Every... I repeat, every... release of the project, the customers want to kill us, because there are a lot of bugs. Management? They say they want tests before the release. Us? We say the same. Time? No time. Management? There is always some time to open the application and click some buttons. Us? We try to explain that it is not so simple. Management doesn't care... end of story. Actually, most of the bugs could be avoided with rigorous work. Some people just want to put on a show for management: "Did you ask for this? Cool, it's done. Bugs? The do-all-the-work guy will solve them." Unfortunately for me, sometimes the do-all-the-work guy also has to finish it. And to make this all better, I'm the person who gets to listen to the complaints from the customers. Cool, huh?

    I know, everyone makes mistakes. But there are mistakes and mistakes...

    To complete the picture, in management's view "the problem is the lack of individual project management", because we cannot do all the stuff they ask, even though there is no PM for the project itself. And they ask us to work overtime without any reward... sigh.

    I do say all this stuff to management and the other members, but by telling it, I'm the bad guy, the guy who complains when everything is going well... but we need to work overtime...

    What can I do to make it work? Has anyone been in a situation like this? What did you do? I hope you can understand my problem; my English is a little rusty. Thanks.


  • How to stop OpenGL from applying blending to certain content? (see pics)

    - by RexOnRoids
    Supporting info: I use cocos2d to draw a sprite (graph background) on the screen (z:-1). I then use cocos2d to draw lines/points (z:0) on top of the background, and make some calls to OpenGL blending functions before the drawing to smooth out the lines.

    Problem: the problem is that, aside from producing smooth lines/points, calling these OpenGL blending functions seems to "degrade" the underlying sprite (graph background). As you can see from the images below, the "degraded" background seems to be made darker and less sharp in Case 2. So there is a tradeoff: I can either have (Case 1) a nice background and choppy lines/points, or I can have (Case 2) nice smooth lines/points and a degraded background. But obviously I need both.

    THE QUESTION: How do I set up OpenGL so as to only apply the blending to the layer with the lines/points in it, and thus leave the background alone?

    The code: I have included the draw() method of the CCLayer for both cases explained above. As you can see, the code producing the difference between Case 1 and Case 2 is just the one or two lines involving OpenGL blending.

    Case 1, MainScene.h (CCLayer):

        -(void)draw {
            int lastPointX = 0;
            int lastPointY = 0;
            GLfloat colorMAX = 255.0f;
            GLfloat valR;
            GLfloat valG;
            GLfloat valB;
            if ([self.myGraphManager ready]) {
                valR = (255.0f/colorMAX)*1.0f;
                valG = (255.0f/colorMAX)*1.0f;
                valB = (255.0f/colorMAX)*1.0f;
                NSEnumerator *enumerator = [[self.myGraphManager.currentCanvas graphPoints] objectEnumerator];
                GraphPoint* object;
                while ((object = [enumerator nextObject])) {
                    if (object.filled) {
                        /* Commenting out the following two lines makes it impossible
                           to have smooth lines/points, but has merit in that it does
                           not degrade the background sprite. */
                        //glEnable(GL_BLEND);
                        //glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
                        glHint(GL_LINE_SMOOTH_HINT, GL_DONT_CARE);
                        glEnable(GL_LINE_SMOOTH);
                        glLineWidth(1.5f);
                        glColor4f(valR, valG, valB, 1.0);
                        ccDrawLine(ccp(lastPointX, lastPointY), ccp(object.position.x, object.position.y));
                        lastPointX = object.position.x;
                        lastPointY = object.position.y;
                        glPointSize(3.0f);
                        glEnable(GL_POINT_SMOOTH);
                        glHint(GL_POINT_SMOOTH_HINT, GL_NICEST);
                        ccDrawPoint(ccp(lastPointX, lastPointY));
                    }
                }
            }
        }

    Case 2, MainScene.h (CCLayer), is identical except that the two blending lines are enabled:

        /* Enabling the following two lines gives nice smooth lines/points,
           but has a problem in that it degrades the background sprite. */
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
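    GL state set in one node's draw method leaks into every node drawn after it, which would explain the background changing. Since cocos2d-iphone runs on OpenGL ES 1.1, which has no glPushAttrib/glPopAttrib, the usual cure is to set the smoothing state just for the line/point pass and put the state back by hand before returning. A hedged sketch (the restored blend function assumes cocos2d's usual premultiplied-alpha default; adjust if your textures differ):

        // Smoothing pass for the graph lines/points only.
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glEnable(GL_LINE_SMOOTH);
        glEnable(GL_POINT_SMOOTH);

        // ... ccDrawLine / ccDrawPoint calls ...

        // Restore the state the sprite layers expect before leaving draw.
        glDisable(GL_POINT_SMOOTH);
        glDisable(GL_LINE_SMOOTH);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);  // cocos2d premultiplied default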


  • Lockless queue implementation ends up having a loop under stress

    - by Fozi
    I have lockless queues written in C in the form of a linked list that contains requests from several threads, posted to and handled in a single thread. After a few hours of stress, I end up having the last request's next pointer pointing to itself, which creates an endless loop and locks up the handling thread.

    The application runs (and fails) on both Linux and Windows. I'm debugging on Windows, where my COMPARE_EXCHANGE_PTR maps to InterlockedCompareExchangePointer.

    This is the code that pushes a request to the head of the list, and is called from several threads:

        void push_request(struct request * volatile * root, struct request * request)
        {
            assert(request);

            do
            {
                request->next = *root;
            }
            while(COMPARE_EXCHANGE_PTR(root, request, request->next) != request->next);
        }

    This is the code that gets a request from the end of the list, and is only called by the single thread that is handling them:

        struct request * pop_request(struct request * volatile * root)
        {
            struct request * volatile * p;
            struct request * request;

            do
            {
                p = root;
                while(*p && (*p)->next)
                    p = &(*p)->next; // <- loops here
                request = *p;
            }
            while(COMPARE_EXCHANGE_PTR(p, NULL, request) != request);

            assert(request->next == NULL);
            return request;
        }

    Note that I'm not using a tail pointer because I wanted to avoid the complication of having to deal with the tail pointer in push_request. However, I suspect that the problem might be in the way I find the end of the list.

    There are several places that push a request into the queue, but they all look generally like this:

        // device->requests is defined as struct request * volatile requests;
        struct request * request = malloc(sizeof(struct request));
        if(request)
        {
            // fill out request fields
            push_request(&device->requests, request);
            sem_post(device->request_sem);
        }

    The code that handles the requests is doing more than that, but in essence does this in a loop:

        if(sem_wait_timeout(device->request_sem, timeout) == sem_success)
        {
            struct request * request = pop_request(&device->requests);
            // handle request
            free(request);
        }

    I also just added a function that checks the list for duplicates before and after each operation, but I'm afraid that this check will change the timing so that I will never encounter the point where it fails. (I'm waiting for it to break as I'm writing this.)

    When I break the hanging program, the handler thread loops in pop_request at the marked position. I have a valid list of one or more requests, and the last one's next pointer points to itself. The request queues are usually short; I've never seen more than 10 entries, and only 1 and 3 the two times I could take a look at this failure in the debugger.

    I thought this through as much as I could, and I came to the conclusion that I should never be able to end up with a loop in my list unless I push the same request twice. I'm quite sure that this never happens. I'm also fairly sure (although not completely) that it's not the ABA problem. I know that I might pop more than one request at the same time, but I believe this is irrelevant here, and I've never seen it happening. (I'll fix this as well.) I thought long and hard about how I can break my function, but I don't see a way to end up with a loop.

    So the question is: can someone see a way how this can break? Can someone prove that this can not?

    Eventually I will solve this (maybe by using a tail pointer or some other solution; locking would be a problem because the threads that post should not be locked, though I do have a RW lock at hand). But I would like to make sure that changing the list actually solves my problem (as opposed to just making it less likely because of different timing).
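    A common way to sidestep the CAS on an interior next pointer in a multi-producer/single-consumer setup is to let the consumer detach the entire list atomically and then walk it privately. A minimal C11 sketch under that assumption (names are hypothetical, and the consumer now drains batches, so the semaphore count no longer maps one-to-one to pops):

        #include <stdatomic.h>
        #include <stddef.h>

        struct request { struct request *next; /* payload ... */ };

        /* Producers: same lock-free push onto the shared head. */
        void push_request(_Atomic(struct request *) *root, struct request *req)
        {
            req->next = atomic_load(root);
            while (!atomic_compare_exchange_weak(root, &req->next, req))
                ;   /* on failure, req->next is reloaded with the current head */
        }

        /* Single consumer: atomically take the whole list, then reverse it
         * privately to get FIFO order. No CAS on interior pointers at all. */
        struct request *pop_all_fifo(_Atomic(struct request *) *root)
        {
            struct request *head = atomic_exchange(root, NULL);
            struct request *fifo = NULL;
            while (head) {
                struct request *next = head->next;
                head->next = fifo;   /* safe: only this thread sees these nodes now */
                fifo = head;
                head = next;
            }
            return fifo;             /* oldest request first; walk via ->next */
        }

    Because the consumer never dereferences nodes that producers can still reach, the whole class of "interior pointer mutated under me" races, ABA included, disappears.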


  • Creating an element and insertBefore is not working...

    - by sologhost
    Ok, I've been banging my head up against the wall on this, and I have no clue why it isn't creating the element. Maybe it's something very small that I overlooked. Basically, there is this JavaScript code in a PHP document that gets output somewhere in the middle of when the page loads. Unfortunately it can't go into the header, though I'm not sure that that is the problem anyway.

        // Setting the variables needed to be set.
        echo '
        <script type="text/javascript" src="' . $settings['default_theme_url'] . '/scripts/dpShoutbox.js"></script>';
        echo '
        <script type="text/javascript">
            var refreshRate = ', $params['refresh_rate'], ';
            createEventListener(window);
            window.addEventListener("load", loadShouts, false);

            function loadShouts()
            {
                var alldivs = document.getElementsByTagName(\'div\');
                var shoutCount = 0;
                var divName = "undefined";
                for (var i = 0; i < alldivs.length; i++)
                {
                    var is_counted = 0;
                    divName = alldivs[i].getAttribute(\'name\');
                    if (divName.indexOf(\'dp_Reserved_Shoutbox\') < 0 && divName.indexOf(\'dp_Reserved_Counted\') < 0)
                        continue;
                    else if (divName == "undefined")
                        continue;
                    else
                    {
                        if (divName.indexOf(\'dp_Reserved_Counted\') == 0)
                        {
                            is_counted = 0;
                            shoutCount++;
                            continue;
                        }
                        else
                        {
                            shoutCount++;
                            is_counted = 1;
                        }
                    }

                    // Empty out the name attr.
                    alldivs[i].name = \'dp_Reserved_Counted\';
                    var shoutId = \'shoutbox_area\' + shoutCount;

                    // Build the div to be inserted.
                    var shoutHolder = document.createElement(\'div\');
                    shoutHolder.setAttribute(\'id\', [shoutId]);
                    shoutHolder.setAttribute(\'class\', \'dp_control_flow\');
                    shoutHolder.style.cssText = \'padding-right: 6px;\';
                    alldivs[i].parentNode.insertBefore(shoutHolder, alldivs[i]);

                    if (is_counted == 1)
                    {
                        startShouts(refreshRate, shoutId);
                        break;
                    }
                }
            }
        </script>';

    Also, I'm sure the other functions that I'm linking to within these functions work just fine. The problem here is that within this function the div never gets created at all, and I can't understand why. Furthermore, Firefox's Firebug is telling me that the variable divName is undefined, even though I have attempted to take care of this within the function, though I'm not sure why. Anyway, I need the created div element to be inserted just before the following HTML:

        echo '
        <div name="dp_Reserved_Shoutbox" style="padding-bottom: 9px;"></div>';

    I'm using name here instead of id because I don't want duplicate id values, which is why I'm changing the name value and incrementing, since this function may be called more than one time. For example, if there are 3 shoutboxes on the same page (don't ask why...lol), I need to skip the other names that I already changed to "dp_Reserved_Counted", which I believe I am doing correctly. In any case, if I could, I would place this into the header and have it called just once, but this isn't possible, as these are loaded with no way of telling which ones they are, so it's directly hard-coded into the actual output of the page where the shoutbox is within the HTML. Basically, I'm not sure if that is the problem or not, but there must be some sort of work-around, unless the problem is within my code above... arrg. Please help me. Thanks :)
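    One likely culprit, hedged since only the snippet above is visible: getAttribute('name') returns null, not the string "undefined", for divs that have no name attribute, so calling indexOf on it throws on the first anonymous div, the exception silently kills loadShouts, and the createElement line is never reached. A small sketch of a null-safe version of that check:

        var divName = alldivs[i].getAttribute('name');

        // getAttribute returns null (not "undefined") when the attribute is
        // absent, and null.indexOf(...) throws, aborting the rest of loadShouts.
        if (divName === null) {
            continue;
        }
        if (divName.indexOf('dp_Reserved_Shoutbox') < 0 &&
            divName.indexOf('dp_Reserved_Counted') < 0) {
            continue;
        }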


  • Received memory warning on setImage

    - by Sam Budda
    This problem has completely stumped me. This is for iOS 5.0 with Xcode 4.2. What's going on is that in my app I let the user select images from their photo album, and I save those images to the app's documents directory. Pretty straightforward. What I do then is that in one of the view controller .m files, I create multiple UIImageViews and set the image for each image view from one of the pictures the user selected, out of the app's directory.

    The problem is that after a certain number of UIImage sets, I receive a "Received memory warning". It usually happens when there are 10 pictures. If, let's say, the user selected 11 pictures, then the app crashes with an error (GBC). NOTE: each of these images is at least 2.5 MB apiece.

    After hours of testing, I finally narrowed down the problem to this line of code:

        [button1AImgVw setImage:image];

    If I comment out that code, all compiles fine and no memory errors happen. But if I don't comment out that code, I receive memory errors and eventually a crash. Also note that it does process the whole CreateViews IBAction, but still crashes at the end. I cannot do release or dealloc, since I am running this on iOS 5.0 with Xcode 4.2.

    Here is the code that I used. Can anyone tell me what I did wrong?

        - (void)viewDidLoad
        {
            [super viewDidLoad];
            // Do any additional setup after loading the view, typically from a nib.
            [self CreateViews];
        }

        - (IBAction)CreateViews
        {
            paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            documentsPath = [paths objectAtIndex:0];

            // Here 15 is for testing purposes.
            for (int i = 0; i < 15; i++)
            {
                // Let's not get bogged down here. The problem is not here.
                UIImageView *button1AImgVw = [[UIImageView alloc] initWithFrame:CGRectMake(10*i, 10, 10, 10)];
                [self.view addSubview:button1AImgVw];

                NSMutableString *picStr1a = [[NSMutableString alloc] init];
                NSString *dataFile1a = [[NSString alloc] init];
                picStr1a = [NSMutableString stringWithFormat:@"%d.jpg", i];
                dataFile1a = [documentsPath stringByAppendingPathComponent:picStr1a];

                NSData *potraitImgData1a = [[NSData alloc] initWithContentsOfFile:dataFile1a];
                UIImage *image = [[UIImage alloc] initWithData:potraitImgData1a];

                // This is causing my app to crash if I load more than 10 images!
                //[button1AImgVw setImage:image];
            }
            NSLog(@"It went to END!");
        }

    Error I get when 10 images are selected (the app does launch and work):

        2012-10-07 17:12:51.483 ABC-APP[7548:707] It went to END!
        2012-10-07 17:12:51.483 ABC-APP[7531:707] Received memory warning.

    App crashes with this error when there are 11 images:

        2012-10-07 17:30:26.339 ABC-APP[7548:707] It went to END!
        (gdb)
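    A hedged observation plus sketch: a 2.5 MB JPEG decompresses to roughly width x height x 4 bytes once a view actually displays it, so ten photo-library images can exhaust an older device's memory whether or not ARC releases the objects on time. Downscaling each image to the size it will be shown at, and draining temporaries per iteration, is the usual cure; the 100x100 target size here is an assumption:

        // Inside the loop: scale the full-size photo down before keeping it.
        @autoreleasepool {
            UIImage *fullImage = [UIImage imageWithContentsOfFile:dataFile1a];

            CGSize targetSize = CGSizeMake(100, 100);  // assumed display size
            UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);
            [fullImage drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
            UIImage *thumb = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();

            [button1AImgVw setImage:thumb];  // the view retains only the small bitmap
        }   // full-size image and context temporaries are released here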


  • mysql_close doesn't kill locked sql requests

    - by Nikita
    I use mysqld Ver 5.1.37-2-log for debian-linux-gnu, and I perform MySQL calls from C++ code with the mysql_query function. The problem occurs when mysql_query executes a procedure and the procedure blocks on a locked table, so mysql_query hangs. If we send a kill signal to the application, we can see that the lock persists until the table is unlocked.

    Create the following SQL table and procedure:

        CREATE TABLE IF NOT EXISTS `tabletolock` (
          `id` INT NOT NULL AUTO_INCREMENT,
          PRIMARY KEY (`id`)
        ) ENGINE = InnoDB;

        DELIMITER $$
        DROP PROCEDURE IF EXISTS `LOCK_PROCEDURE` $$
        CREATE PROCEDURE `LOCK_PROCEDURE`()
        BEGIN
          SELECT id INTO @id FROM tabletolock;
        END $$
        DELIMITER ;

    These are the SQL commands to reproduce the problem:

    1. In one terminal, execute: lock tables tabletolock write;
    2. In another terminal, execute: call LOCK_PROCEDURE();
    3. In the first terminal, execute show processlist and see:

        | 2492 | root | localhost | syn_db | Query | 12 | Locked | SELECT id INTO @id FROM tabletolock |

    Then press Ctrl-C in the second terminal to interrupt our procedure and look at the process list again. It has not changed: we still see the locked SELECT request, and can only terminate it with unlock tables or a KILL command.

    The problem described occurs with the mysql command-line client. The same problem also exists when we use the functions mysql_query and mysql_close. Example C code:

        #include <iostream>
        #include <mysql/mysql.h>
        #include <mysql/errmsg.h>
        #include <signal.h>

        // g++ -Wall -g -fPIC -lmysqlclient dbtest.cpp

        using namespace std;

        MYSQL * connection = NULL;

        void closeconnection()
        {
            if (connection != NULL)
            {
                cout << "close connection !\n";
                mysql_close(connection);
                mysql_thread_end();
                delete connection;
                mysql_library_end();
            }
        }

        void sigkill(int s)
        {
            closeconnection();
            signal(SIGINT, NULL);
            raise(s);
        }

        int main(int argc, char ** argv)
        {
            signal(SIGINT, sigkill);

            connection = new MYSQL;
            mysql_init(connection);
            mysql_options(connection, MYSQL_READ_DEFAULT_GROUP, "nnfc");
            if (!mysql_real_connect(connection, "127.0.0.1", "user", "password", "db", 3306, NULL, CLIENT_MULTI_RESULTS))
            {
                delete connection;
                cout << "cannot connect\n";
                return -1;
            }

            cout << "before procedure call\n";
            mysql_query(connection, "CALL LOCK_PROCEDURE();");
            cout << "after procedure call\n";

            closeconnection();
            return 0;
        }

    Compile it, and perform the following actions:

    1. In the first terminal: lock tables tabletolock write;
    2. Run the program: ./a.out
    3. Interrupt the program with Ctrl-C. On the screen we see that the closeconnection function is called, so the connection is closed.
    4. In the first terminal, execute show processlist and see that the procedure was not interrupted.

    My question is: how do I terminate such locked calls from C code? Thank you in advance!
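    The server keeps executing a statement even after the client socket disappears, so the usual approach (a hedged sketch) is to remember the connection's thread id right after connecting, and issue KILL QUERY for it from a second connection when you want to abort:

        // Abort a statement running on another connection, given its thread id.
        // Call mysql_thread_id(connection) right after connecting and stash it.
        #include <mysql/mysql.h>
        #include <stdio.h>

        int kill_running_query(unsigned long victim_thread_id)
        {
            MYSQL kill_conn;
            mysql_init(&kill_conn);
            if (!mysql_real_connect(&kill_conn, "127.0.0.1", "user", "password",
                                    "db", 3306, NULL, 0))
                return -1;

            char sql[64];
            snprintf(sql, sizeof(sql), "KILL QUERY %lu", victim_thread_id);
            int rc = mysql_query(&kill_conn, sql);  // stops the statement, keeps the session

            mysql_close(&kill_conn);
            return rc;
        }

    KILL QUERY aborts only the running statement; plain KILL would also terminate the victim's session. Either way the blocked mysql_query in the first connection returns with an error instead of hanging forever.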


  • C# IOException: The process cannot access the file because it is being used by another process.

    - by Michiel Bester
    Hi, I have a slight problem. What my application is supposed to do is watch a folder for any newly copied file with the extension '.XPD', open the file, and assign the lines to an array. After that, the data from the array should be inserted into a MySQL database, and then the used file moved to another folder when it's done.

    The problem is that the application works fine with the first file, but as soon as the next file is copied to the folder, I get this exception, for example: 'The process cannot access the file 'C:\inetpub\admission\file2.XPD' because it is being used by another process'. If two files, on the other hand, are copied at the same time, there's no problem at all.

    The following code is on the main window:

        public partial class Form1 : Form
        {
            static string folder = specified path;
            static FileProcessor processor;

            public Form1()
            {
                InitializeComponent();
                processor = new FileProcessor();
                InitializeWatcher();
            }

            static FileSystemWatcher watcher;

            static void InitializeWatcher()
            {
                watcher = new FileSystemWatcher();
                watcher.Path = folder;
                watcher.Created += new FileSystemEventHandler(watcher_Created);
                watcher.EnableRaisingEvents = true;
                watcher.Filter = "*.XPD";
            }

            static void watcher_Created(object sender, FileSystemEventArgs e)
            {
                processor.QueueInput(e.FullPath);
            }
        }

    As you can see, the file's path is entered into a queue for processing, which is in another class called FileProcessor:

        class FileProcessor
        {
            private Queue<string> workQueue;
            private Thread workerThread;
            private EventWaitHandle waitHandle;

            public FileProcessor()
            {
                workQueue = new Queue<string>();
                waitHandle = new AutoResetEvent(true);
            }

            public void QueueInput(string filepath)
            {
                workQueue.Enqueue(filepath);
                if (workerThread == null)
                {
                    workerThread = new Thread(new ThreadStart(Work));
                    workerThread.Start();
                }
                else if (workerThread.ThreadState == ThreadState.WaitSleepJoin)
                {
                    waitHandle.Set();
                }
            }

            private void Work()
            {
                while (true)
                {
                    string filepath = RetrieveFile();
                    if (filepath != null)
                        ProcessFile(filepath);
                    else
                        waitHandle.WaitOne();
                }
            }

            private string RetrieveFile()
            {
                if (workQueue.Count > 0)
                    return workQueue.Dequeue();
                else
                    return null;
            }

            private void ProcessFile(string filepath)
            {
                string xName = Path.GetFileName(filepath);
                string fName = Path.GetFileNameWithoutExtension(filepath);
                string gfolder = specified path;
                bool fileInUse = true;
                string line;
                string[] itemArray = null;
                int i = 0;

                #region Declare Db variables
                // variables for each field of the database are created here
                #endregion

                #region Populate array
                while (fileInUse == true)
                {
                    FileStream fs = new FileStream(filepath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
                    StreamReader reader = new StreamReader(fs);
                    itemArray = new string[75];
                    while (!reader.EndOfStream == true)
                    {
                        line = reader.ReadLine();
                        itemArray[i] = line;
                        i++;
                    }
                    fs.Flush();
                    reader.Close();
                    reader.Dispose();
                    i = 0;
                    fileInUse = false;
                }
                #endregion

                #region Assign Db variables
                // here all the variables get their values from the array
                #endregion

                #region MySql Connection
                // here the connection to MySQL is made and the variables are inserted into the db
                #endregion

                #region Test and Move file
                if (System.IO.File.Exists(gfolder + xName))
                {
                    System.IO.File.Delete(gfolder + xName);
                }
                Directory.Move(filepath, gfolder + xName);
                #endregion
            }
        }

    The problem I get occurs in the "Populate array" region. I read a lot of other threads and was led to believe that flushing the file stream would help.

    I am also thinking of adding a try..catch so that if processing the file succeeds, the file is moved to gfolder, and if it fails, it is moved to bfolder. Any help would be awesome. Tx
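    A likely cause, hedged since the copier isn't shown: FileSystemWatcher's Created event fires as soon as the destination file exists, often while the copy is still writing to it, so the first read races whatever process is copying the file in. A common workaround is to retry opening the file until the writer lets go; a minimal sketch (retry count and delay are arbitrary):

        // Returns true once the file can be opened exclusively, i.e. the copy
        // has finished; gives up after maxAttempts so a stuck file can't hang
        // the worker thread forever.
        private static bool WaitForFile(string path, int maxAttempts = 30)
        {
            for (int attempt = 0; attempt < maxAttempts; attempt++)
            {
                try
                {
                    using (var fs = new FileStream(path, FileMode.Open,
                                                   FileAccess.Read, FileShare.None))
                    {
                        return true;   // exclusive open succeeded; writer is done
                    }
                }
                catch (IOException)
                {
                    System.Threading.Thread.Sleep(500);  // still being copied; wait
                }
            }
            return false;
        }

        // In ProcessFile, before the "Populate array" region:
        //     if (!WaitForFile(filepath)) { /* move to bfolder or log */ return; }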


  • Resolving a Forward Declaration Issue Involving a State Machine in C++

    - by hypersonicninja
    I've recently returned to C++ development after a hiatus, and have a question regarding the implementation of the State design pattern. I'm using the vanilla pattern, exactly as per the GoF book. My problem is that the state machine itself is based on some hardware used as part of an embedded system, so the design is fixed and can't be changed. This results in a circular dependency between two of the states (in particular), and I'm trying to resolve this.

    Here's the simplified code (note that I tried to resolve this by using headers as usual but still had problems; I've omitted them in this code snippet):

        #include <iostream>
        #include <memory>
        using namespace std;

        class Context
        {
        public:
            friend class State;
            Context() { }
        private:
            State* m_state;
        };

        class State
        {
        public:
            State() { }
            virtual void Trigger1() = 0;
            virtual void Trigger2() = 0;
        };

        class LLT : public State
        {
        public:
            LLT() { }
            void Trigger1() { new DH(); }
            void Trigger2() { new DL(); }
        };

        class ALL : public State
        {
        public:
            ALL() { }
            void Trigger1() { new LLT(); }
            void Trigger2() { new DH(); }
        };

        // DL needs to 'know' about DH.
        class DL : public State
        {
        public:
            DL() { }
            void Trigger1() { new ALL(); }
            void Trigger2() { new DH(); }
        };

        class HLT : public State
        {
        public:
            HLT() { }
            void Trigger1() { new DH(); }
            void Trigger2() { new DL(); }
        };

        class AHL : public State
        {
        public:
            AHL() { }
            void Trigger1() { new DH(); }
            void Trigger2() { new HLT(); }
        };

        // DH needs to 'know' about DL.
        class DH : public State
        {
        public:
            DH () { }
            void Trigger1() { new AHL(); }
            void Trigger2() { new DL(); }
        };

        int main()
        {
            auto_ptr<LLT> llt (new LLT);
            auto_ptr<ALL> all (new ALL);
            auto_ptr<DL> dl (new DL);
            auto_ptr<HLT> hlt (new HLT);
            auto_ptr<AHL> ahl (new AHL);
            auto_ptr<DH> dh (new DH);
            return 0;
        }

    The problem is basically that in the State pattern, state transitions are made by invoking the ChangeState method in the Context class, which invokes the constructor of the next state. Because of the circular dependency, I can't invoke the constructor, because it's not possible to pre-declare both of the 'problem' states' constructors.

    I had a look at this article, and the template method seemed to be the ideal solution, but it doesn't compile, and my knowledge of templates is rather limited... The other idea I had was to try to introduce a helper class to the subclassed states, via multiple inheritance, to see if it's possible to specify the base class's constructor and have a reference to the state subclass's constructor. But I think that was rather ambitious...

    Finally, would a direct implementation of the Factory Method design pattern be the best way to resolve the entire problem?
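    The standard C++ cure for this is independent of the pattern: forward-declare the state classes, keep only declarations in the class bodies, and define the trigger methods after every class is complete, so mutually referring constructors are visible to each other. A minimal sketch using just the two problem states (in a real project, the declarations go in the header and the definitions in the .cpp file):

        // Forward declarations break the circular dependency.
        class DH;
        class DL;

        class State {
        public:
            virtual ~State() {}
            virtual void Trigger1() = 0;
            virtual void Trigger2() = 0;
        };

        class DL : public State {
        public:
            void Trigger1();   // declared only; DH is still incomplete here
            void Trigger2();
        };

        class DH : public State {
        public:
            void Trigger1();
            void Trigger2();
        };

        // Definitions after both classes are complete, so `new DH` / `new DL`
        // compile. (Ownership handling is elided, as in the question's code.)
        void DL::Trigger1() { /* new ALL(); */ }
        void DL::Trigger2() { new DH(); }
        void DH::Trigger1() { /* new AHL(); */ }
        void DH::Trigger2() { new DL(); }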


  • Log4j: Events appear in the wrong logfile

    - by Markus
    Hi there! To be able to log and trace some events I've added a LoggingHandler class to my java project. Inside this class I'm using two different log4j logger instances - one for logging an event and one for tracing an event into different files. The initialization block of the class looks like this: public void initialize() { System.out.print("starting logging server ..."); // create logger instances logLogger = Logger.getLogger("log"); traceLogger = Logger.getLogger("trace"); // create pattern layout String conversionPattern = "%c{2} %d{ABSOLUTE} %r %p %m%n"; try { patternLayout = new PatternLayout(); patternLayout.setConversionPattern(conversionPattern); } catch (Exception e) { System.out.println("error: could not create logger layout pattern"); System.out.println(e); System.exit(1); } // add pattern to file appender try { logFileAppender = new FileAppender(patternLayout, logFilename, false); traceFileAppender = new FileAppender(patternLayout, traceFilename, false); } catch (IOException e) { System.out.println("error: could not add logger layout pattern to corresponding appender"); System.out.println(e); System.exit(1); } // add appenders to loggers logLogger.addAppender(logFileAppender); traceLogger.addAppender(traceFileAppender); // set logger level logLogger.setLevel(Level.INFO); traceLogger.setLevel(Level.INFO); // start logging server loggingServer = new LoggingServer(logLogger, traceLogger, serverPort, this); loggingServer.start(); System.out.println(" done"); } To make sure that only only thread is using the functionality of a logger instance at the same time each logging / tracing method calls the logging method .info() inside a synchronized-block. One example looks like this: public void logMessage(String message) { synchronized (logLogger) { if (logLogger.isInfoEnabled() && logFileAppender != null) { logLogger.info(instanceName + ": " + message); } } } If I look at the log files, I see that sometimes a event appears in the wrong file. One example: trace 10:41:30,773 11080 INFO masterControl(192.168.2.21): string broadcast message was pushed from 1267093 to vehicle 1055293 (slaveControl 1) trace 10:41:30,784 11091 INFO masterControl(192.168.2.21): string broadcast message was pushed from 1156513 to vehicle 1105792 (slaveControl 1) trace 10:41:30,796 11103 INFO masterControl(192.168.2.21): string broadcast message was pushed from 1104306 to vehicle 1055293 (slaveControl 1) trace 10:41:30,808 11115 INFO masterControl(192.168.2.21): vehicle 1327879 was pushed to slave control 1 10:41:30,808 11115 INFO masterControl(192.168.2.21): string broadcast message was pushed from 1101572 to vehicle 106741 (slaveControl 1) trace 10:41:30,820 11127 INFO masterControl(192.168.2.21): string broadcast message was pushed from 1055293 to vehicle 1104306 (slaveControl 1) I think that the problem occures everytime two event happen at the same time (here: 10:41:30,808). Does anybody has an idea how to solve my problem? I already tried to add a sleep() after the method call, but that doesn't helped ... 
BR, Markus

Edit: Here are two examples of the garbled output (the interleaving is exactly as it appears in the files):

logtrace 11:16:07,75511:16:07,755 1129711297 INFOINFO masterControl(192.168.2.21): string broadcast message was pushed from 1291400 to vehicle 1138272 (slaveControl 1)masterControl(192.168.2.21): vehicle 1333770 was added to slave control 1

or

log 11:16:08,562 12104 INFO 11:16:08,562 masterControl(192.168.2.21): string broadcast message was pushed from 117772 to vehicle 1217744 (slaveControl 1) 12104 INFO masterControl(192.168.2.21): vehicle 1169775 was pushed to slave control 1

Edit 2: It seems like the problem only occurs if logging methods are called from inside an RMI thread (my client / server exchange information using RMI connections). ...

Edit 3: I solved the problem myself: it seems like log4j is NOT completely thread-safe. After synchronizing all log / trace methods using a separate object, everything is working fine. Maybe the lib is writing the messages to a thread-unsafe buffer before writing them to file?
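For anyone landing here with the same symptom, the fix described in Edit 3 - guarding every log/trace call with one shared lock object instead of synchronizing on the individual loggers - would look roughly like this (a sketch reconstructed from that description; traceMessage is a hypothetical counterpart to the logMessage method shown above):

// One lock shared by EVERY log/trace method, so calls arriving on
// different RMI threads can no longer interleave inside the appenders.
private static final Object LOG_LOCK = new Object();

public void logMessage(String message) {
    synchronized (LOG_LOCK) {
        if (logLogger.isInfoEnabled() && logFileAppender != null) {
            logLogger.info(instanceName + ": " + message);
        }
    }
}

// hypothetical counterpart for the trace file, guarded by the SAME lock
public void traceMessage(String message) {
    synchronized (LOG_LOCK) {
        if (traceLogger.isInfoEnabled() && traceFileAppender != null) {
            traceLogger.info(instanceName + ": " + message);
        }
    }
}

The key point is that both methods synchronize on the same object; synchronizing each method on its own logger (as in the original code) still allows a log call and a trace call to run concurrently.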

    Read the article

  • Drawing using Dynamic Array and Buffer Object

    - by user1905910
I have a problem when creating the vertex array and the indices array. I don't know what the problem with the code really is, but I guess it is something to do with the type of the arrays - could someone please shed some light on this?

#define GL_GLEXT_PROTOTYPES
#include <GL/glut.h>
#include <iostream>
using namespace std;

#define BUFFER_OFFSET(offset) ((GLfloat*) NULL + offset)

const GLuint numDiv = 2;
const GLuint numVerts = 9;
GLuint VAO;

void display(void)
{
    enum vertex {VERTICES, INDICES, NUM_BUFFERS};
    GLuint * buffers = new GLuint[NUM_BUFFERS];
    GLfloat (*squareVerts)[2] = new GLfloat[numVerts][2];
    GLubyte * indices = new GLubyte[numDiv*numDiv*4];
    GLuint delta = 80/numDiv;
    for(GLuint i = 0; i < numVerts; i++) {
        squareVerts[i][1] = (i/(numDiv+1))*delta;
        squareVerts[i][0] = (i%(numDiv+1))*delta;
    }
    for(GLuint i=0; i < numDiv; i++){
        for(GLuint j=0; j < numDiv; j++){
            // each iteration generates 4 points
#define NUM_VERT(ii,jj) ((ii)*(numDiv+1)+(jj))
#define INDICE(ii,jj) (4*((ii)*numDiv+(jj)))
            indices[INDICE(i,j)] = NUM_VERT(i,j);
            indices[INDICE(i,j)+1] = NUM_VERT(i,j+1);
            indices[INDICE(i,j)+2] = NUM_VERT(i+1,j+1);
            indices[INDICE(i,j)+3] = NUM_VERT(i+1,j);
        }
    }
    glGenVertexArrays(1, &VAO);
    glBindVertexArray(VAO);
    glGenBuffers(NUM_BUFFERS, buffers);
    glBindBuffer(GL_ARRAY_BUFFER, buffers[VERTICES]);
    glBufferData(GL_ARRAY_BUFFER, sizeof(squareVerts), squareVerts, GL_STATIC_DRAW);
    glVertexPointer(2, GL_FLOAT, 0, BUFFER_OFFSET(0));
    glEnableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[INDICES]);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(1.0,1.0,1.0);
    glDrawElements(GL_POINTS, 16, GL_UNSIGNED_BYTE, BUFFER_OFFSET(0));
    glutSwapBuffers();
}

void init()
{
    glClearColor(0.0, 0.0, 0.0, 0.0);
    gluOrtho2D((GLdouble) -1.0, (GLdouble) 90.0, (GLdouble) -1.0, (GLdouble) 90.0);
}

int main(int argv, char** argc)
{
    glutInit(&argv, argc);
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE);
    glutInitWindowSize(500,500);
    glutInitWindowPosition(100,100);
    glutCreateWindow("myCode.cpp");
    init();
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}

Edit: The problem here is that the drawing doesn't work at all, but I don't get any error - it just doesn't display what I want to display. Even if I put the code that builds the vertices and fills the buffers in a different function, it doesn't work. I just tried to do this:

void display(void)
{
    glBindVertexArray(VAO);
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(1.0,1.0,1.0);
    glDrawElements(GL_POINTS, 16, GL_UNSIGNED_BYTE, BUFFER_OFFSET(0));
    glutSwapBuffers();
}

and I placed the rest of the code from display in another function that is called at the start of the program. But the problem is still there.
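No answer is recorded in this digest, but one concrete bug stands out: squareVerts and indices are heap pointers, so sizeof(squareVerts) and sizeof(indices) evaluate to the size of a pointer (4 or 8 bytes), not the size of the arrays - glBufferData therefore uploads almost no data. A sketch of the corrected calls, using the byte counts the code itself already computes:

// sizeof(pointer) != sizeof(array): pass the real byte counts instead
glBufferData(GL_ARRAY_BUFFER,
             numVerts * 2 * sizeof(GLfloat),        // 9 vertices x 2 floats
             squareVerts, GL_STATIC_DRAW);

glBufferData(GL_ELEMENT_ARRAY_BUFFER,
             numDiv * numDiv * 4 * sizeof(GLubyte), // 16 ubyte indices
             indices, GL_STATIC_DRAW);

A secondary note: the conventional BUFFER_OFFSET macro casts to char* rather than GLfloat*, so that offsets are measured in bytes; with only offset 0 in use it makes no difference here, but it would for non-zero offsets.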

    Read the article

  • Projections.count() and Projections.countDistinct() both result in the same query

    - by Kim L
    EDIT: I've edited this post completely, so that the new description of my problem includes all the details and not only what I previously considered relevant. Maybe this new description will help to solve the problem I'm facing. I have two entity classes, Customer and CustomerGroup. The relation between customer and customer groups is ManyToMany. The customer groups are annotated in the following way in the Customer class. @Entity public class Customer { ... @ManyToMany(mappedBy = "customers", fetch = FetchType.LAZY) public Set<CustomerGroup> getCustomerGroups() { ... } ... public String getUuid() { return uuid; } ... } The customer reference in the customer groups class is annotated in the following way @Entity public class CustomerGroup { ... @ManyToMany public Set<Customer> getCustomers() { ... } ... public String getUuid() { return uuid; } ... } Note that both the CustomerGroup and Customer classes also have an UUID field. The UUID is a unique string (uniqueness is not forced in the datamodel, as you can see, it is handled as any other normal string). What I'm trying to do, is to fetch all customers which do not belong to any customer group OR the customer group is a "valid group". The validity of a customer group is defined with a list of valid UUIDs. I've created the following criteria query Criteria criteria = getSession().createCriteria(Customer.class); criteria.setProjection(Projections.countDistinct("uuid")); criteria = criteria.createCriteria("customerGroups", "groups", Criteria.LEFT_JOIN); List<String> uuids = getValidUUIDs(); Criterion criterion = Restrictions.isNull("groups.uuid"); if (uuids != null && uuids.size() > 0) { criterion = Restrictions.or(criterion, Restrictions.in( "groups.uuid", uuids)); } criteria.add(criterion); When executing the query, it will result in the following SQL query select count(*) as y0_ from Customer this_ left outer join CustomerGroup_Customer customergr3_ on this_.id=customergr3_.customers_id left outer join CustomerGroup groups1_ on customergr3_.customerGroups_id=groups1_.id where groups1_.uuid is null or groups1_.uuid in ( ?, ? ) The query is exactly what I wanted, but with one exception. Since a Customer can belong to multiple CustomerGroups, left joining the CustomerGroup will result in duplicated Customer objects. Hence the count(*) will give a false value, as it only counts how many results there are. I need to get the amount of unique customers and this I expected to achieve by using the Projections.countDistinct("uuid"); -projection. For some reason, as you can see, the projection will still result in a count(*) query instead of the expected count(distinct uuid). Replacing the projection countDistinct with just count("uuid") will result in the exactly same query. Am I doing something wrong or is this a bug? === "Problem" solved. Reason: PEBKAC (Problem Exists Between Keyboard And Chair). I had a branch in my code and didn't realize that the branch was executed. That branch used rowCount() instead of countDistinct().
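For reference, the two projections do generate different SQL when the right one is actually executed - roughly like this, against the criteria object built in the question:

// count(*): counts every row of the joined result, duplicates included
criteria.setProjection(Projections.rowCount());

// count(distinct this_.uuid): one per customer, which the left join requires
criteria.setProjection(Projections.countDistinct("uuid"));

So if countDistinct appears to emit count(*), the first thing to check (as the resolution above shows) is whether that code path is the one actually running.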

    Read the article

  • migrating Solaris to RH: network latency issue, tcp window size & other tcp parameters

    - by Bastien
Hello, I have a client/server app (Java) that I'm migrating from Solaris to RH Linux. Since I started running it on RH, I noticed some issues related to latency. I managed to isolate the problem, which looks like this: the client sends 5 messages (32 bytes each) in a row (same application timestamp) to the server; the server echoes the messages; the client receives the replies and prints the round-trip time for each message.

On Solaris, all is well: I get ALL 5 replies at the same time, roughly 80ms after having sent the original messages (client & server are several thousand miles away from each other: my ping RTT is 80ms, all normal). On RH, the first 3 messages are echoed normally (they arrive 80ms after they've been sent), but the following 2 arrive 80ms later (so 160ms total RTT). The pattern is always the same - it clearly looked like a TCP problem.

On my Solaris box, I had previously configured the TCP stack with 2 specific options: disable the Nagle algorithm globally, and set tcp_deferred_acks_max to 0. On RH it's not possible to disable Nagle globally, but I disabled it on all of my apps' sockets (TCP_NODELAY).

So I started playing with tcpdump (on the server machine) and compared both outputs:

SOLARIS:
22 2.085645 client server TCP 56150 > 6006 [PSH, ACK] Seq=111 Ack=106 Win=66672 Len=22 "MSG_1 RCV"
23 2.085680 server client TCP 6006 > 56150 [ACK] Seq=106 Ack=133 Win=50400 Len=0
24 2.085908 client server TCP 56150 > 6006 [PSH, ACK] Seq=133 Ack=106 Win=66672 Len=22 "MSG_2 RCV"
25 2.085925 server client TCP 6006 > 56150 [ACK] Seq=106 Ack=155 Win=50400 Len=0
26 2.086175 client server TCP 56150 > 6006 [PSH, ACK] Seq=155 Ack=106 Win=66672 Len=22 "MSG_3 RCV"
27 2.086192 server client TCP 6006 > 56150 [ACK] Seq=106 Ack=177 Win=50400 Len=0
28 2.086243 server client TCP 6006 > 56150 [PSH, ACK] Seq=106 Ack=177 Win=50400 Len=21 "MSG_1 ECHO"
29 2.086440 client server TCP 56150 > 6006 [PSH, ACK] Seq=177 Ack=106 Win=66672 Len=22 "MSG_4 RCV"
30 2.086454 server client TCP 6006 > 56150 [ACK] Seq=127 Ack=199 Win=50400 Len=0
31 2.086659 server client TCP 6006 > 56150 [PSH, ACK] Seq=127 Ack=199 Win=50400 Len=21 "MSG_2 ECHO"
32 2.086708 client server TCP 56150 > 6006 [PSH, ACK] Seq=199 Ack=106 Win=66672 Len=22 "MSG_5 RCV"
33 2.086721 server client TCP 6006 > 56150 [ACK] Seq=148 Ack=221 Win=50400 Len=0
34 2.086947 server client TCP 6006 > 56150 [PSH, ACK] Seq=148 Ack=221 Win=50400 Len=21 "MSG_3 ECHO"
35 2.087196 server client TCP 6006 > 56150 [PSH, ACK] Seq=169 Ack=221 Win=50400 Len=21 "MSG_4 ECHO"
36 2.087500 server client TCP 6006 > 56150 [PSH, ACK] Seq=190 Ack=221 Win=50400 Len=21 "MSG_5 ECHO"
37 2.165390 client server TCP 56150 > 6006 [ACK] Seq=221 Ack=148 Win=66632 Len=0
38 2.166314 client server TCP 56150 > 6006 [ACK] Seq=221 Ack=190 Win=66588 Len=0
39 2.364135 client server TCP 56150 > 6006 [ACK] Seq=221 Ack=211 Win=66568 Len=0

REDHAT:
17 2.081163 client server TCP 55879 > 6006 [PSH, ACK] Seq=111 Ack=106 Win=66672 Len=22 "MSG_1 RCV"
18 2.081178 server client TCP 6006 > 55879 [ACK] Seq=106 Ack=133 Win=5888 Len=0
19 2.081297 server client TCP 6006 > 55879 [PSH, ACK] Seq=106 Ack=133 Win=5888 Len=21 "MSG_1 ECHO"
20 2.081711 client server TCP 55879 > 6006 [PSH, ACK] Seq=133 Ack=106 Win=66672 Len=22 "MSG_2 RCV"
21 2.081761 client server TCP 55879 > 6006 [PSH, ACK] Seq=155 Ack=106 Win=66672 Len=22 "MSG_3 RCV"
22 2.081846 server client TCP 6006 > 55879 [PSH, ACK] Seq=127 Ack=177 Win=5888 Len=21 "MSG_2 ECHO"
23 2.081995 server client TCP 6006 > 55879 [PSH, ACK] Seq=148 Ack=177 Win=5888 Len=21 "MSG_3 ECHO"
24 2.082011 client server TCP 55879 > 6006 [PSH, ACK] Seq=177 Ack=106 Win=66672 Len=22 "MSG_4 RCV"
25 2.082362 client server TCP 55879 > 6006 [PSH, ACK] Seq=199 Ack=106 Win=66672 Len=22 "MSG_5 RCV"
26 2.082377 server client TCP 6006 > 55879 [ACK] Seq=169 Ack=221 Win=5888 Len=0
27 2.171003 client server TCP 55879 > 6006 [ACK] Seq=221 Ack=148 Win=66632 Len=0
28 2.171019 server client TCP 6006 > 55879 [PSH, ACK] Seq=169 Ack=221 Win=5888 Len=42 "MSG_4 ECHO + MSG_5 ECHO"
29 2.257498 client server TCP 55879 > 6006 [ACK] Seq=221 Ack=211 Win=66568 Len=0

So I got confirmation that things are not working correctly on RH: packet 28 is sent TOO LATE - it looks like the server is waiting for packet 27's ACK before doing anything. That seems to me the most likely reason... Then I realized that the "Win" parameters differ between the Solaris & RH dumps: 50400 on Solaris, only 5888 on RH. That's another hint... I read the docs about the sliding window & buffer sizes, and played around with the rcvBuffer & sendBuffer on my Java sockets, but never managed to change this 5888 value to anything else (I checked each time directly with tcpdump). Does anybody know how to do this? I'm having a hard time getting definitive information, as in some cases there's "auto-negotiation" that I might need to bypass, etc. I eventually managed to get only partially rid of my initial problem by setting the "tcp_slow_start_after_idle" parameter to 0 on RH, but it did not change the "win" parameter at all: the same problem was there for the first 4 groups of 5 messages, with TCP retransmissions & TCP Dup ACKs in tcpdump, then the problem disappeared altogether for all following groups of 5 messages. That doesn't seem like a very clean and/or generic solution to me; I'd really like to reproduce the exact same conditions under both OSes. I'll keep researching, but any help from TCP gurus would be greatly appreciated! Thanks!
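A note for anyone chasing the same symptom: in Java, the receive buffer size only influences the window advertised in the TCP handshake if it is set before the socket is connected (or, for accepted sockets, on the ServerSocket before bind/accept); setting it afterwards is too late. A minimal sketch - the port matches the dumps above, but the 128 KB figure and host name are just placeholders, not recommendations:

import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class WindowTuning {
    public static void main(String[] args) throws Exception {
        // Client side: size the receive buffer BEFORE connect() so it can
        // affect the window (and window scaling) negotiated in the handshake.
        Socket s = new Socket();
        s.setReceiveBufferSize(128 * 1024); // example value
        s.setTcpNoDelay(true);              // Nagle off, as the poster already does
        s.connect(new InetSocketAddress("server", 6006));

        // Server side: set it on the ServerSocket before bind(), so every
        // accepted connection inherits the larger buffer.
        ServerSocket ss = new ServerSocket();
        ss.setReceiveBufferSize(128 * 1024);
        ss.bind(new InetSocketAddress(6006));
    }
}

On the kernel side, Linux also caps auto-tuned buffers via sysctls such as net.ipv4.tcp_rmem and net.core.rmem_max; those are worth checking too, though this is a general pointer rather than a confirmed fix for this particular case.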

    Read the article

  • IE7 doesn't render part of page until the window resizes or switch between tabs

    - by BlackMael
I have a problem with IE7. I have a fixed layout that keeps the header and a side panel fixed on the page, leaving only the "main content" area, which can happily scroll its content. This layout works perfectly fine in IE6 and IE8, but sometimes one page may start "hiding" the content that should be showing in the "main content" area. The page finishes loading just fine: for a split second IE7 renders the main content correctly, and then it immediately hides it from view... somewhere... It would also seem that it only experiences this problem when there is enough content to force the "main content" area to scroll. Resizing the window or switching to another open tab and back again causes IE7 to show the page as it was intended. Note that the same problem occurs in IE8 in compatibility mode, but the page is rendered correctly in IE8 mode. If need be I can attach the basic CSS styling I use, but I first want to see if this is a known issue with IE7. Does IE7 have issues with positioned layouts and overflow scrolling, such that it sometimes forgets to finish rendering the page correctly until some window redraw event forces it to? Please remember, this exact same layout is used across multiple pages in the site, as it is set up in a master page. It is just (in this case) one page that is experiencing this problem. Other pages with the exact same layout do render correctly. UPDATE: A related question which doesn't have an answer at this point. LATE UPDATE: Adding the example master page and CSS. Please note this layout is the same for all the pages in the application. My problem with IE7 only occurs on one such page; all other pages happily render correctly in IE7. Just one page, using the exact same layout, has issues where it sometimes hides the content in the "work-space" div.
The master page <%@ Master Language="VB" CodeFile="MasterPage.master.vb" Inherits="shared_templates_MasterPage" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title></title> <link rel="Stylesheet" type="text/css" href="~/common/yui/2.7.0/build/reset-fonts/reset-fonts.css" runat="server" /> <link rel="Stylesheet" type="text/css" href="~/shared/css/layout.css" runat="server" /> <asp:ContentPlaceHolder ID="head" runat="server" /> </head> <body> <form id="form1" runat="server"> <asp:ScriptManager ID="ScriptManager1" runat="server" /> <div id="app-header"> </div> <div id="side-panel"> </div> <div id="work-space"> <asp:ContentPlaceHolder ID="WorkSpaceContentPlaceHolder" runat="server" /> </div> <div id="status-bar"> <asp:ContentPlaceHolder ID="StatusBarContentPlaceHolder" runat="server" /> </div> </form> </body> </html> The layout.css html { overflow: hidden; } body { overflow: hidden; padding: 0; margin: 0; width: 100%; height: 100%; background-color: white; } body, table, td, th, select, textarea, input { font-family: Tahoma, Arial, Sans-Serif; font-size: 9pt; } p { padding-left: 1em; margin-bottom: 1em; } #app-header { position: absolute; top: 0; left: 0; width: 100%; height: 80px; background-color: #dcdcdc; border-bottom: solid 4px #000; } #side-panel { position: absolute; top: 84px; left: 0px; bottom: 0px; overflow: auto; padding: 0; margin: 0; width: 227px; background-color: #AABCCA; border-right: solid 1px black; background-repeat: repeat-x; padding-top: 5px; } #work-space { position: absolute; top: 84px; left: 232px; right: 0px; padding: 0; margin: 0; bottom: 22px; overflow: auto; background-color: White; } #status-bar { position: absolute; height: 20px; left: 228px; right: 0px; padding: 0; margin: 0; bottom: 0px; border-top: solid 1px #c0c0c0; background-color: #f0f0f0; } The Default.aspx <%@ Page Title="Test" Language="VB" MasterPageFile="~/shared/templates/MasterPage.master" AutoEventWireup="false" CodeFile="Default.aspx.vb" Inherits="_Default" %> <asp:Content ID="WorkspaceContent" ContentPlaceHolderID="WorkSpaceContentPlaceHolder" Runat="Server"> Workspace <asp:ListView ID="DemoListView" runat="server" DataSourceID="DemoObjectDataSource" ItemPlaceholderID="DemoPlaceHolder"> <LayoutTemplate> <table style="border: 1px solid #a0a0a0; width: 600px"> <colgroup> <col width="80" /> <col /> <col width="80" /> <col width="120" /> </colgroup> <tbody> <asp:PlaceHolder ID="DemoPlaceHolder" runat="server" /> </tbody> </table> </LayoutTemplate> <ItemTemplate> <tr> <th><%#Eval("ID")%></th> <td><%#Eval("Name")%></td> <td><%#Eval("Size")%></td> <td><%#Eval("CreatedOn", "{0:yyyy-MM-dd HH:mm:ss}")%></td> </tr> </ItemTemplate> </asp:ListView> <asp:ObjectDataSource ID="DemoObjectDataSource" runat="server" OldValuesParameterFormatString="original_{0}" SelectMethod="GetData" TypeName="DemoLogic"> <SelectParameters> <asp:Parameter Name="path" Type="String" /> </SelectParameters> </asp:ObjectDataSource> </asp:Content> <asp:Content ID="StatusContent" ContentPlaceHolderID="StatusBarContentPlaceHolder" Runat="Server"> Ready OK. </asp:Content>
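No answer is recorded here, but this class of IE7 bug is commonly tied to the browser's proprietary "hasLayout" flag and to a known glitch where content inside an overflow:auto container disappears until a repaint. Two candidate experiments worth trying (guesses at a workaround, not a confirmed fix for this page; the child selectors assume the markup shown above):

/* Candidate 1: force hasLayout on the scrolling container.
   'zoom' is IE-proprietary and ignored by other browsers. */
#work-space { zoom: 1; }

/* Candidate 2: relatively position the scrolled content - the classic
   fix for IE's "content vanishes inside overflow:auto" bug. */
#work-space > table,
#work-space > div { position: relative; }

Since only one page misbehaves, comparing what that page's content adds (for example the generated ListView table) against a working page may also narrow down which element needs the workaround.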

    Read the article

  • How to get full query string parameters not UrlDecoded

    - by developerit
Introduction
While developing Developer IT's website, we came across a problem when the user searches for keywords containing special characters like the plus '+' char. We found it while looking for C++ in our search engine. The request parameter output in ASP.NET was "c ". I found it strange that it removed the '++' and replaced it with a space...

Analysis
After a bit of Googling and Reflection, it turns out that ASP.NET calls UrlDecode on each parameter retrieved by the Request("item") method. The Request.Params property is affected by this too, since it mashes the QueryString, Forms and other collections into a single one.

Workaround
Finally, I solved the puzzle using the Request.RawUrl property, parsing it with the same RegEx I use in my URL rewriter. The RawUrl is not affected by anything; as its name says, it's raw.

Published on http://www.developerit.com/
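The article doesn't reproduce its RegEx, but a minimal sketch of the idea - reading a parameter's raw, un-decoded value straight from Request.RawUrl inside a page or controller - might look like this (the parameter name "q" is a hypothetical example):

// Request.RawUrl keeps the URL exactly as requested,
// e.g. "/search?q=c++" stays "c++" instead of decoding to "c  ".
string rawUrl = Request.RawUrl;
string rawQuery = rawUrl.Contains("?")
    ? rawUrl.Substring(rawUrl.IndexOf('?') + 1)
    : string.Empty;

string rawValue = null;
foreach (string pair in rawQuery.Split('&'))
{
    string[] kv = pair.Split(new[] { '=' }, 2); // split on first '=' only
    if (kv[0] == "q")                           // hypothetical parameter name
    {
        rawValue = kv.Length > 1 ? kv[1] : string.Empty;
        break;
    }
}
// rawValue is "c++" here, where Request["q"] would have decoded the
// plus signs into spaces.

Note that a value obtained this way is still percent-encoded; decode only the escapes you actually want (for example %20) rather than calling UrlDecode on the whole string.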

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx

Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure, or we can store our VHD files in blob storage and mount them as hard drives in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blobs and block blobs. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunked read and write, which is the more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is very powerful for uploading large files: we can upload blocks in parallel and provide a pause-resume feature. There are many documents, articles and blog posts describing how to upload a block blob. Most of them focus on the server side, that is: once you have received a big file, stream or binaries, how to upload them into blob storage in blocks through the .NET SDK. But the problem is: how can we upload these large files from the client side, for example, a browser? This question came to me when I was working with a Chinese customer to help them build a network disk product on top of Azure. The end users upload their files from the web portal, and the files are then stored in blob storage from the Web Role. My goal is to find the best way to transfer the file from the client (end user's machine) to the server (Web Role) through the browser. In this post I will demonstrate and describe what I have done to upload large files in chunks at high speed, and save them as blocks into Windows Azure Blob Storage.

Traditional Upload, Works with Limitation

The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button.

1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" }))
2: {
3:     <input type="file" name="file" />
4:     <input type="submit" value="upload" />
5: }

And then in the backend controller, we retrieve the whole content of this file and upload it into blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit. The code has been well blogged in the community.
1: [HttpPost]
2: public ActionResult About(HttpPostedFileBase file)
3: {
4:     var container = _client.GetContainerReference("test");
5:     container.CreateIfNotExists();
6:     var blob = container.GetBlockBlobReference(file.FileName);
7:     var blockDataList = new Dictionary<string, byte[]>();
8:     using (var stream = file.InputStream)
9:     {
10:         var blockSizeInKB = 1024;
11:         var offset = 0;
12:         var index = 0;
13:         while (offset < stream.Length)
14:         {
15:             var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset);
16:             var blockData = new byte[readLength];
17:             offset += stream.Read(blockData, 0, readLength);
18:             blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData);
19:
20:             index++;
21:         }
22:     }
23:
24:     Parallel.ForEach(blockDataList, (bi) =>
25:     {
26:         blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null);
27:     });
28:     blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray());
29:
30:     return RedirectToAction("About");
31: }

This works perfectly if we select an image, a piece of music or a small video to upload. But if I select a large file, let's say a 6GB HD movie, then after a few minutes of uploading the request fails and the upload is terminated. In ASP.NET there is a limitation on the request length; the maximum request length is defined in the web.config file, and it's a value less than about 4GB. So if we want to upload a really big file, we cannot simply implement it this way. Also, in Windows Azure, the cloud service network load balancer will terminate the connection if it exceeds the timeout period; from my tests the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitations mentioned above, the simple HTML file upload cannot provide a rich upload experience such as chunked upload, pause and pause-resume. So we need to find a better way to upload large files from the client to the server.

Upload in Chunks through HTML5 and JavaScript

In order to break the limitations mentioned above, we will try to upload the large file in chunks. This gives us some benefits:
- No request size limitation: since we upload in chunks, we can define the request size for each chunk regardless of how big the entire file is.
- No timeout problem: the size of the chunks is controlled by us, which means we should be able to make sure the request for each chunk upload will not exceed the timeout period of either ASP.NET or the Windows Azure load balancer.
It was a big challenge to upload a big file in chunks until HTML5 arrived. HTML5 introduces some new features and improvements, and we will use them to implement our solution.
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
The test cases were:
- Chunk size: 512KB, 1MB, 2MB, 4MB.
- Upload mode: sequence, parallel (unlimited), parallel with limit (4 threads, 8 threads).
- Chunk format: base64 string, binaries.
- Target file: 100MB.
- Each case was tested 3 times.

Below is the test result chart. Some thoughts, but not guidance or best practice:
- Parallel gets better performance than series.
- No significant performance improvement between parallel with 4 threads and with 8 threads.
- Transferring binaries provides better performance than base64.
- In all cases, a chunk size of 1MB - 2MB gets better performance.

Hope this helps,
Shaun

All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

< Previous Page | 585 586 587 588 589 590 591 592 593 594 595 596  | Next Page >