Search Results

Search found 9106 results on 365 pages for 'course'.


  • .NET: Avoidance of custom exceptions by utilising existing types, but which?

    - by Mr. Disappointment
    Consider the following code (ASP.NET/C#):

        private void Application_Start(object sender, EventArgs e)
        {
            if (!SetupHelper.SetUp())
            {
                throw new ShitHitFanException();
            }
        }

    I've never been too hesitant to simply roll my own exception type, basically because I have found (bad practice, or not) that a reasonably descriptive type name mostly gives us enough as developers to know what happened and why it might have happened. Sometimes the existing .NET exception types even accommodate these needs, regardless of the message. In this particular scenario, for demonstration purposes only, the application should die a horrible, disgraceful death should SetUp not complete properly (as dictated by its return value), but I can't find an already existing exception type in .NET which would seem to suffice; though I'm sure one is there and I simply don't know about it. Brad Abrams posted this article that lists some of the available exception types. I say some because the article is from 2005, and, although I try to keep up to date, it's a more than plausible assumption that more have been added in later framework versions that I am still unaware of. Of course, Visual Studio gives you a nicely formatted, scrollable list of exceptions via IntelliSense, but even on analysing those, I find none which would seem to suffice for this situation...

    - ApplicationException: "...when a non-fatal application error occurs." The name seems reasonable, but the error is very definitely fatal - the app is dead.
    - ExecutionEngineException: "...when there is an internal error in the execution engine of the CLR." Again, sounds reasonable, superficially; but this has a very definite purpose, and helping me out here certainly isn't it.
    - HttpApplicationException: "...when there is an error processing an HTTP request." Well, we're running an ASP.NET application! But we're also just pulling at straws here.
    - InvalidOperationException: "...when a call is invalid for the current state of an instance." This isn't right, but I'm adding it to the list of 'possible should you put a gun to my head, yes'.
    - OperationCanceledException: "...upon cancellation of an operation the thread was executing." Maybe I wouldn't feel so bad using this one, but I'd still be hijacking the damn thing with little right.

    You might even ask why on earth I would want to raise an exception here, but the idea is to find out: if I were to do so, do you know of an appropriate exception for such a scenario? And basically, to what extent can we piggy-back on .NET while keeping in line with rationality?
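
    If nothing built-in fits, a small custom type is cheap to define. A minimal sketch in C# (the type name and message are illustrative, not from the original post):

        using System;

        // Hypothetical name; [Serializable] and the three standard constructors
        // follow the usual framework design guidelines for exception types.
        [Serializable]
        public class ApplicationSetupException : Exception
        {
            public ApplicationSetupException()
                : base("Application setup failed; the application cannot start.") { }

            public ApplicationSetupException(string message) : base(message) { }

            public ApplicationSetupException(string message, Exception inner)
                : base(message, inner) { }
        }

    The call site then reads throw new ApplicationSetupException();, which documents the failure at least as well as any repurposed built-in type.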

    Read the article

  • Reliable and fast way to convert a zillion ODT files to PDF?

    - by Marco Mariani
    I need to pre-produce a million or two PDF files from a simple template (a few pages and tables) with embedded fonts. Usually I would stay low-level in a case like this and compose everything with a library like ReportLab, but I joined the project late. Currently, I have a template.odt and use markers in the content.xml files to fill with data from a DB. I can smoothly create the ODT files; they always look right. For the ODT-to-PDF conversion, I'm using openoffice in server mode (and PyODConverter w/ named pipe), but it's not very reliable: in a batch of documents, there is eventually a point after which all the processed files are converted into garbage (wrong fonts and letters sprawled all over the page). The problem is not predictably reproducible (it does not depend on the data), and happens in OOo 2.3 and 3.2, in Ubuntu, XP, Server 2003 and Windows 7. My Heisenbug detector is ticking. I tried to reduce the size of batches and restart OOo after each one; still, a small percentage of the documents are messed up. Of course I'll write about this on the OOo mailing lists, but in the meanwhile, I have a delivery and have lost too much time already. Where do I go?

    - Completely avoid the ODT format and go for another template system. Suggestions? Anything that takes a few seconds to run is way too slow. OOo takes around a second, and that sums to 15 days of processing time; I had to write a program for clustering the jobs over several clients.
    - Keep the format but go for another tool/program for the conversion. Which one? There are many apps in the shareware or commercial repositories for Windows, but trying each one is a daunting task. Some are too slow, some cannot be run in batch without buying them first, some cannot work from the command line, etc. Open source tools tend not to reinvent the wheel and often depend on openoffice. Converting to an intermediate .DOC format could help to avoid the OOo bug, but it would double the processing time and complicate a task that is already too hairy.
    - Try to produce the PDFs twice and compare them, discarding the whole batch if there's something wrong. Although the documents look equal, I know of no way to compare the binary content.
    - Restart OOo after processing each document. It would take a lot more time to produce them; it would lower the percentage of wrong files, but make it very hard to identify them.
    - Go for ReportLab and recreate the pages programmatically. This is the approach I'm going to try in a few minutes.
    - Learn to properly format bulleted lists.

    Thanks a lot.
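
    For the restart-per-batch variant, here is a rough sketch of a driver in Python; the commands, paths and batch size are assumptions, and PyODConverter's actual entry points may differ:

        import subprocess
        import time

        BATCH_SIZE = 50  # assumption: small enough to usually finish before corruption

        def convert_batch(odt_paths):
            # Start a fresh OOo service for this batch (command line is illustrative).
            office = subprocess.Popen(
                ["soffice", "-headless", "-accept=pipe,name=converter;urp;"])
            time.sleep(10)  # crude wait for the service pipe to come up
            try:
                for path in odt_paths:
                    subprocess.check_call(
                        ["python", "DocumentConverter.py",
                         path, path.replace(".odt", ".pdf")])
            finally:
                office.terminate()  # a brand-new process for every batch

        def convert_all(paths):
            for i in range(0, len(paths), BATCH_SIZE):
                convert_batch(paths[i:i + BATCH_SIZE])

    This trades throughput for a bounded blast radius: if the corruption strikes, only one batch needs to be redone.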

    Read the article

  • Segmenting a double array of labels

    - by Ami
    The Problem: I have a large double array populated with various labels. Each element (cell) in the double array contains a set of labels, and some elements in the double array may be empty. I need an algorithm to cluster elements in the double array into discrete segments. A segment is defined as a set of pixels that are adjacent within the double array and one label that all those pixels in the segment have in common. (Diagonal adjacency doesn't count, and I'm not clustering empty cells.)

        |-------|-------|-------|
        | Jane  | Joe   |       |
        | Jack  | Jane  |       |
        |-------|-------|-------|
        | Jane  | Jane  |       |
        |       | Joe   |       |
        |-------|-------|-------|
        |       | Jack  | Jane  |
        |       | Joe   |       |
        |-------|-------|-------|

    In the above arrangement of labels distributed over nine elements, the largest cluster is the "Jane" cluster occupying the four upper-left cells.

    What I've Considered: I've considered iterating through every label of every cell in the double array and testing to see if the cell-label combination under inspection can be associated with a preexisting segment. If the element under inspection cannot be associated with a preexisting segment, it becomes the first member of a new segment. If the label/cell combination can be associated with a preexisting segment, it associates. Of course, to make this method reasonable I'd have to implement an elaborate hashing system: I'd have to keep track of all the cell-label combinations that stand adjacent to preexisting segments and are in the path of the incrementing indices that are iterating through the double array. This hash method would avoid having to iterate through every pixel in every preexisting segment to find an adjacency.

    Why I Don't Like It: As is, the above algorithm doesn't take into consideration the case where an element in the double array can be associated with two unique segments, one in the horizontal direction and one in the vertical direction. To handle these cases properly, I would need to implement a test for this specific case and then a method that will both associate the element under inspection with a segment and then concatenate the two adjacent identical segments. On the whole, this method and the intricate hashing system it would require feel very inelegant. Additionally, I really only care about finding the large segments in the double array, and I'm much more concerned with the speed of this algorithm than with the accuracy of the segmentation, so I'm looking for a better way. I assume there is some stochastic method for doing this that I haven't thought of. Any suggestions?
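
    Not the stochastic method asked about, but for reference the deterministic version needs none of the merge bookkeeping if each (cell, label) pair is flood-filled exactly once. A sketch in Python, assuming the grid is a list of lists whose cells are sets of labels:

        from collections import deque

        def label_segments(grid):
            """Return a list of (label, cells) segments via 4-connected flood fill."""
            rows, cols = len(grid), len(grid[0])
            seen = set()  # (row, col, label) triples already assigned to a segment
            segments = []
            for r in range(rows):
                for c in range(cols):
                    for lab in grid[r][c]:
                        if (r, c, lab) in seen:
                            continue
                        cells, queue = set(), deque([(r, c)])
                        seen.add((r, c, lab))
                        while queue:
                            cr, cc = queue.popleft()
                            cells.add((cr, cc))
                            for nr, nc in ((cr+1, cc), (cr-1, cc), (cr, cc+1), (cr, cc-1)):
                                if (0 <= nr < rows and 0 <= nc < cols
                                        and lab in grid[nr][nc]
                                        and (nr, nc, lab) not in seen):
                                    seen.add((nr, nc, lab))
                                    queue.append((nr, nc))
                        segments.append((lab, cells))
            return segments

    Each (cell, label) pair is visited once, so the cost is linear in the number of labels in the grid, and the horizontal-versus-vertical merge case never arises.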

    Read the article

  • Optimizing spacing of mesh containing a given set of points

    - by Feynman
    I tried to summarize this as best as possible in the title. I am writing an initial value problem solver in the most general way possible. I start with an arbitrary number of initial values at arbitrary locations (inside a boundary). The first part of my program creates a mesh/grid (I am not sure which is the correct nuance), with N points total, that contains all the initial values. My goal is to optimize the mesh such that the spacing is as uniform as possible. My solver seems to work half decently (it needs some more obscure debugging that is not relevant here). I am starting with one dimension; I intend to generalize the algorithm to an arbitrary number of dimensions once I get it working consistently. I am writing my code in Fortran, but feel free to reply with pseudocode or the language of your choice. Allow me to elaborate with an example. Say I am working on a closed interval [1,10]:

        xmin=1
        xmax=10

    Say I have 3 initial points: xmin, 5 and xmax:

        num_ivc=3
        known(num_ivc)=[xmin,5,xmax]  // my arrays start at 1. Assume "known" starts sorted

    I store my mesh/grid points in an array called coord. Say I want 10 points total in my mesh/grid:

        N=10
        coord(10)

    Remember, all this is arbitrary - except the variable names, of course. The algorithm should set coord to {1,2,3,4,5,6,7,8,9,10}. Now for a less trivial example:

        num_ivc=3
        known(num_ivc)=[xmin,5.5,xmax]

    or just

        num_ivc=1
        known(num_ivc)=[5.5]

    Now, would you have 5 evenly spaced points on the interval [1, 5.5] and 5 evenly spaced points on the interval (5.5, 10]? But there is more space between 1 and 5.5 than between 5.5 and 10. So would you have 6 points on [1, 5.5] followed by 4 on (5.5, 10]? The key is to minimize the difference in spacing. I have been working on this for 2 days straight and I can assure you it is a lot trickier than it sounds. I have written code that:

    - only works if N is large
    - only works if N is small
    - only works if the known points are close together
    - only works if the known points are far apart
    - only works if at least one of the known points is near a boundary
    - only works if none of the known points are near a boundary

    So as you can see, I have coded the gamut of almost-solutions. I cannot figure out a way to get it to perform equally well in all possible scenarios (that is, create the optimum spacing).
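
    One way to frame the problem: every gap between consecutive known points (boundaries included) gets a share of the remaining N points proportional to its length, rounded by largest remainder, and each gap is then filled uniformly. A sketch in Python rather than Fortran (names are made up):

        import numpy as np

        def fill_mesh(known, xmin, xmax, n):
            """Spread n mesh points over [xmin, xmax], keeping every known point."""
            pts = sorted(set([xmin, xmax] + list(known)))
            gaps = list(zip(pts, pts[1:]))
            free = n - len(pts)                      # points still to place
            total = xmax - xmin
            share = [(b - a) / total * free for a, b in gaps]
            alloc = [int(s) for s in share]          # floor of each share
            rem = free - sum(alloc)                  # hand out by largest remainder
            order = sorted(range(len(gaps)), key=lambda i: share[i] - alloc[i],
                           reverse=True)
            for i in order[:rem]:
                alloc[i] += 1
            mesh = [pts[0]]
            for (a, b), k in zip(gaps, alloc):
                mesh.extend(np.linspace(a, b, k + 2)[1:])  # interior points + right end
            return mesh

    On the first example (known = [1, 5, 10], N = 10) this yields exactly {1,...,10}; whether largest-remainder rounding is optimal for every spacing metric is not guaranteed, but it behaves the same for large and small N and for points near or far from a boundary.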

    Read the article

  • msysgit git-am can't apply its own git format-patch sequence

    - by Andrian Nord
    I'm using msysgit git on Windows to operate on a central svn repository. I'm using git as I want to have its awesome little local branches for everything and rebasing on each other. I also need to update from the central repo often, so using separate svn/git is not an option. The problem is, git svn --help (the man page) says that it is not a good idea to use git merge into the master branch (which is set to track svn's trunk) from local branches, as this will ruin the party and git svn dcommit would not work anymore. I know that it's not exactly true and you may use git merge if you are merging from a branch which was properly rebased on master prior to the merge, but I'm trying to make it safer and actually use git format-patch and git am. We are using code review, so I'm making patches anyway. I also knew about git cherry-pick, but I want to just git am /reviewed/patches/dir/* without actually recalling which commits corresponded to these patches (without reading the patches, that is).

    So, what's wrong with git svn and git am? It's simple: git am, for a few very hard points, is doing CRLF-to-LF conversion on the patches supplied (git-mailsplit is doing this, to be precise) if not rebasing. git format-patch also produces proper (LF-ended) patches. As my repo is mostly CRLF (and it should remain so), patches are, obviously, failing due to the wrong EOLs. Converting the diffs to CRLF and somehow hacking git am to prevent it from converting is not working either: it will fail if any file was added or deleted - git apply will complain about an expected /dev/null (but it got /dev/null^M). And if I apply the patches with git am --ignore-space-change --ignore-whitespace, then it will commit LF endings straight to the index, which is also weird. I don't know if that will survive committing into svn (via git svn dcommit) and checking it out again, and I don't want to find out. Of course, it's still possible to try hacking around the patches to convert only the actual diffs, but this is too much hacking for a simple task.

    So, I wonder, is there really no established way to produce patches and apply them to the same repo on the same system? It just feels weird that msysgit can't apply its own patches.
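
    For what it's worth, newer versions of git grew an option aimed at exactly this case; whether the installed msysgit has it is worth checking (treat the config knob as an assumption about the version):

        # --keep-cr tells git-mailsplit not to strip CRs, so CRLF patches
        # apply cleanly to a CRLF working tree:
        git am --keep-cr /reviewed/patches/dir/*

        # or, if this msysgit honours it, set the behaviour once:
        git config am.keepcr true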

    Read the article

  • Job queue manager with RPC interface

    - by admr
    I need a job queue manager that I can control over the Internet. It should be able to execute and stop processes, check on their status (ideally notice and execute some code when a process exits), respond to commands and also be able to report back to a server.

    Background: I have a GWT application that allows users to create jobs to execute on a cloud instance (currently EC2). I want to push a "job packet" (data for a process to operate on, etc.) to S3, start a Linux EC2 instance (or use one that's already running), and tell a job manager on the instance to execute that job (possibly in parallel to other jobs). It should then pull the "job packet" from S3, run a process that operates on that data, and report back to the server that is running the server part of my GWT application with some information (e.g. exit code, stdout, stderr). If I have to write e.g. stdout/stderr to a file from the process and read that file, that's OK too. I would really like the manager to be "close" to the processes it runs, meaning I want to avoid using something like Runtime.exec from the JDK; it seems like I would have to do that if I used Quartz, for example. I'm fine with the calls in both directions being asynchronous, and with any reasonable technology for the calls as long as I can easily build an interface for it on my GWT server side (e.g. HTTP requests to a servlet over SSL would be nice and trivial). The job manager does not need a very sophisticated queueing system: running several processes either sequentially or in parallel would be fine. Determining how much compute time a process received during its lifetime would be nice (AFAIK, this might be challenging).

    I have not yet found any existing software that does this, including http://java-source.net/open-source/job-schedulers. I suspect I might have to build an RPC interface (with authentication etc., of course) around a job manager, maybe using something like Apache Commons Exec. In that case, I would prefer Java or Python for the job manager part. I would be happy to hear suggestions for either the former or the latter scenario!

    Read the article

  • Problem accessing updated variables within OnTouch

    - by Jay Smith
    I have an OnTouch and a setOnTouchListener that update variables which contain screen coordinate info. The problem is that they never seem to update them. On line 78, RGB.setText(test); it never changes from 0.0. If I move that line and the line above it into the onTouch, it updates. Any idea what is wrong? Thank you.

        package com.evankimia.huskybus;

        import com.test.huskybus.R;
        import android.app.Activity;
        import android.os.Bundle;
        import android.view.MotionEvent;
        import android.view.View;
        import android.view.View.OnTouchListener;
        import android.widget.TextView;

        public class HuskyBus extends Activity {
            TextView RGB;
            private CampusMap mCampusMap;
            private float startX = 0; // track x from one ACTION_MOVE to the next
            private float startY = 0; // track y from one ACTION_MOVE to the next
            float scrollByX = 0;      // x amount to scroll by
            float scrollByY = 0;      // y amount to scroll by

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                RGB = (TextView) findViewById(R.id.coordBox);
                mCampusMap = (CampusMap) findViewById(R.id.map);
                mCampusMap.setOnTouchListener(new OnTouchListener() {
                    @Override
                    public boolean onTouch(View v, MotionEvent event) {
                        switch (event.getAction()) {
                        case MotionEvent.ACTION_DOWN:
                            // Remember our initial down event location.
                            startX = event.getRawX();
                            startY = event.getRawY();
                            break;
                        case MotionEvent.ACTION_MOVE:
                            float x = event.getRawX();
                            float y = event.getRawY();
                            // Calculate move update. This will happen many times
                            // during the course of a single movement gesture.
                            scrollByX = x - startX; // move update x increment
                            scrollByY = y - startY; // move update y increment
                            startX = x;             // reset initial values to latest
                            startY = y;
                            mCampusMap.invalidate();
                            break;
                        } // end switch
                        return false;
                    }
                });

                String test = "" + scrollByX;
                RGB.setText(test);
            }
        }
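
    A likely explanation: the last two statements in onCreate() run exactly once, during activity creation, before any touch has occurred; the listener updates scrollByX afterwards, but nothing ever re-reads it. A sketch of the fix, with the label updated inside the ACTION_MOVE case (only the changed case is shown):

        case MotionEvent.ACTION_MOVE:
            float x = event.getRawX();
            float y = event.getRawY();
            scrollByX = x - startX;
            scrollByY = y - startY;
            startX = x;
            startY = y;
            // Update the TextView on every move event, not once in onCreate().
            RGB.setText(String.valueOf(scrollByX));
            mCampusMap.invalidate();
            break;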

    Read the article

  • What is the Fastest Way to Check for a Keyword in a List of Keywords in Delphi?

    - by lkessler
    I have a small list of keywords. What I'd really like to do is akin to:

        case MyKeyword of
          'CHIL': (code for CHIL);
          'HUSB': (code for HUSB);
          'WIFE': (code for WIFE);
          'SEX':  (code for SEX);
          else    (code for everything else);
        end;

    Unfortunately the CASE statement can't be used like that for strings. I could use the straight IF THEN ELSE IF construct, e.g.:

        if MyKeyword = 'CHIL' then (code for CHIL)
        else if MyKeyword = 'HUSB' then (code for HUSB)
        else if MyKeyword = 'WIFE' then (code for WIFE)
        else if MyKeyword = 'SEX' then (code for SEX)
        else (code for everything else);

    but I've heard this is relatively inefficient. What I had been doing instead is:

        P := pos(' ' + MyKeyword + ' ', ' CHIL HUSB WIFE SEX ');
        case P of
          1:  (code for CHIL);
          6:  (code for HUSB);
          11: (code for WIFE);
          17: (code for SEX);
          else (code for everything else);
        end;

    This, of course, is not the best programming style, but it works fine for me and up to now didn't make a difference. So what is the best way to rewrite this in Delphi so that it is both simple and understandable, but also fast? (For reference, I am using Delphi 2009 with Unicode strings.)

    Followup: Toby recommended I simply use the IF THEN ELSE construct. Looking back at my examples that used a CASE statement, I can see how that is a viable answer. Unfortunately, my inclusion of the CASE inadvertently hid my real question. I actually don't care which keyword it is; that is just a bonus if the particular method can identify it like the POS method can. What I need is to know whether or not the keyword is in the set of keywords. So really I want to know if there is anything better than:

        if pos(' ' + MyKeyword + ' ', ' CHIL HUSB WIFE SEX ') > 0 then

    The IF THEN ELSE equivalent does not seem better in this case, being:

        if (MyKeyword = 'CHIL') or (MyKeyword = 'HUSB') or (MyKeyword = 'WIFE') or (MyKeyword = 'SEX') then

    In Barry's comment to Kornel's question, he mentions the TDictionary generic. I've not yet picked up on the new generic collections, and it looks like I should delve into them. My question here would be whether they are built for efficiency, and how using TDictionary would compare in looks and in speed to the above two lines. In later profiling, I found that the concatenation of strings, as in (' ' + MyKeyword + ' '), is VERY expensive time-wise and should be avoided whenever possible. Almost any other solution is better than doing this.
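
    For the set-membership version, a sketch of the TDictionary route (assumes Delphi 2009's Generics.Collections; the dictionary is built once and reused, so each lookup is a single hash probe with no concatenation):

        uses SysUtils, Generics.Collections;

        var
          Keywords: TDictionary<string, Integer>;

        procedure InitKeywords;
        begin
          Keywords := TDictionary<string, Integer>.Create;
          Keywords.Add('CHIL', 1);
          Keywords.Add('HUSB', 2);
          Keywords.Add('WIFE', 3);
          Keywords.Add('SEX', 4);
        end;

        function IsKeyword(const MyKeyword: string): Boolean;
        begin
          // O(1) on average; TryGetValue would also give the index if needed.
          Result := Keywords.ContainsKey(MyKeyword);
        end;

    Whether it beats four direct string comparisons for so small a set is something only profiling will settle, but it avoids the expensive ' ' + MyKeyword + ' ' concatenation entirely.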

    Read the article

  • How to dispose of a .NET COM interop object on Release()

    - by mhenry1384
    I have a COM object written in managed code (C++/CLI). I am using that object from standard C++. How do I force my COM object's destructor to be called immediately when the COM object is released? If that's not possible, can I have Release() call a MyDispose() method on my COM object? My code to declare the object (C++/CLI):

        [Guid("57ED5388-blahblah")]
        [InterfaceType(ComInterfaceType::InterfaceIsIDispatch)]
        [ComVisible(true)]
        public interface class IFoo
        {
            void Doit();
        };

        [Guid("417E5293-blahblah")]
        [ClassInterface(ClassInterfaceType::None)]
        [ComVisible(true)]
        public ref class Foo : IFoo
        {
        public:
            void MyDispose();
            ~Foo() {MyDispose();}                    // This is never called
            !Foo() {MyDispose();}                    // This is called by the garbage collector.
            virtual ULONG Release() {MyDispose();}   // This is never called
            virtual void Doit();
        };

    My code to use the object (native C++):

        #import "..\\Debug\\Foo.tlb"
        ...
        Bar::IFoo setup(__uuidof(Bar::Foo));  // This object comes from the .tlb.
        setup.Doit();
        setup->Release();  // explicit release, not really necessary since Bar::IFoo's destructor will call Release().

    If I put a destructor method on my COM object, it is never called. If I put a finalizer method, it is called when the garbage collector gets around to it. If I explicitly call my Release() override, it is never called. I would really like it so that when my native Bar::IFoo object goes out of scope, it automatically calls my .NET object's dispose code. I would think I could do it by overriding Release() and, if the object count = 0, calling MyDispose(). But apparently I'm not overriding Release() correctly, because my Release() method is never called. Obviously, I can make this happen by putting my MyDispose() method in the interface and requiring the people using my object to call MyDispose() before Release(), but it would be slicker if Release() just cleaned up the object. Is it possible to force the .NET COM object's destructor, or some other method, to be called immediately when a COM object is released? Googling on this issue gets me a lot of hits telling me to call System.Runtime.InteropServices.Marshal.ReleaseComObject(), but of course, that's how you tell .NET to release a COM object. I want COM Release() to dispose of a .NET object.

    Read the article

  • [NEW VERSION] Algorithm or method for writing programs [UNSOLVED] [closed]

    - by fatai
    I am a computer science student. Everyone solves problems with a different method, or the same one (but actually I don't know whether they use a method at all, or whether there is a common method for approaching problems). If there is a common method, what is it? If there are different methods, which method are you using? All teachers give us problems, sometimes in simple form, but they do not introduce any approach or method(s), so we cannot choose a method, apply it to the problem, find a solution and then write the code. There is no help from the teacher; we are pushed to find a method to solve the homework ourselves.

    For example, my friend uses no method; he says, "I start to construct the algorithm while I try to write the program." I found my own method after I failed the course. More accurately, my method: when I encounter a problem in a language, I get some paper, and then:

    First, the input/output step: my program will take this/these argument(s) and return, namely, X. For example, in C: the input length is not known in advance and the items are all of the same type, so I must use a pointer; the desired output is in the form of a package, so I use a structure.

    Second, the execution part: in this step, I write all the steps that lead to the final output. For example, in Python:

        1) [+, [-, 4, [*, 1, 2]], 5]
        2) [+, [-, 4, 2], 5]
        3) [+, 2, 5]
        4) 7  ==> return 7

    Third, I write test code. For example, in C++:

        input:
            append 3 4 5 6 vector_x
            remove 0 1
        desired output:
            vector_x holds: 5 6

    Now, my other questions are: what other method(s) have been used by other users to construct classes (for C++, Python, Java), to make classes/computers communicate, or to solve embedded-system problems (for C)? Some programmers use a generalized method without considering the programming language (Java, Perl, ...); what is this method? Why do I wonder? Because I have learned that if you don't construct the algorithm on paper, you may never achieve your goal. Like "no money, no lunch", I can say "no algorithm, no program". Therefore, feel free to describe your own method, whether it is your own invention or one introduced by someone else that you use and find efficient.

    Read the article

  • OpenGL Vertex Buffer Object code giving bad output.

    - by Matthew Mitchell
    Hello. My Vertex Buffer Object code is supposed to render textures nicely, but instead the textures are being rendered oddly, with some triangle shapes.

    What happens: http://godofgod.co.uk/my_files/wrong.png
    What is supposed to happen: http://godofgod.co.uk/my_files/right.png

    This function creates the VBO and sets the vertex and texture coordinate data:

        extern "C" GLuint create_box_vbo(GLdouble size[2]){
            GLuint vbo;
            glGenBuffers(1,&vbo);
            glBindBuffer(GL_ARRAY_BUFFER, vbo);

            GLsizeiptr data_size = 8*sizeof(GLdouble);
            GLdouble vertices[] = {0,0, 0,size[1], size[0],0, size[0],size[1]};
            glBufferData(GL_ARRAY_BUFFER, data_size, vertices, GL_STATIC_DRAW);

            data_size = 8*sizeof(GLint);
            GLint textcoords[] = {0,0, 0,1, 1,0, 1,1};
            glBufferData(GL_ARRAY_BUFFER, data_size, textcoords, GL_STATIC_DRAW);

            return vbo;
        }

    Here is some relevant code from another function which is supposed to draw the textures with the VBO:

        glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glColor4d(1,1,1,a/255);
        glBindTexture(GL_TEXTURE_2D, texture);
        glTranslated(offset[0],offset[1],0);

        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glVertexPointer(2, GL_DOUBLE, 0, 0);
        glEnableClientState(GL_VERTEX_ARRAY);
        glTexCoordPointer(2, GL_INT, 0, 0);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glDrawArrays(GL_TRIANGLES, 0, 3);
        glDrawArrays(GL_TRIANGLES, 1, 3);
        glDisableClientState(GL_TEXTURE_COORD_ARRAY);
        glDisableClientState(GL_VERTEX_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, 0);

    I would have hoped for the code to use the first three coordinates (top-left, bottom-left, top-right) and the last three (bottom-left, top-right, bottom-right) to draw the triangles with the texture data correctly in the most efficient way. I don't see why triangles should make it more efficient, but apparently that's the way to go. It, of course, fails for some reason. I am asking what is broken, but also: am I going about it in the right way generally? Thank you.
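
    One thing stands out: the second glBufferData call does not append the texture coordinates - it re-specifies the buffer's entire data store, discarding the vertices, so both glVertexPointer and glTexCoordPointer end up reading the integer texcoord data. A sketch of one fix (one buffer, two glBufferSubData uploads, pointers offset into it):

        /* Allocate room for both arrays, then upload each part separately. */
        GLsizeiptr vsize = 8 * sizeof(GLdouble);
        GLsizeiptr tsize = 8 * sizeof(GLint);
        glBufferData(GL_ARRAY_BUFFER, vsize + tsize, NULL, GL_STATIC_DRAW);
        glBufferSubData(GL_ARRAY_BUFFER, 0, vsize, vertices);
        glBufferSubData(GL_ARRAY_BUFFER, vsize, tsize, textcoords);

        /* When drawing, point each array at its own region: */
        glVertexPointer(2, GL_DOUBLE, 0, (const GLvoid*)0);
        glTexCoordPointer(2, GL_INT, 0, (const GLvoid*)vsize);

    With the four vertices ordered as they are, glDrawArrays(GL_TRIANGLE_STRIP, 0, 4) would also draw the quad in a single call.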

    Read the article

  • What happens when several Java servlet apps are running on the same port?

    - by Frank
    Something strange happened to my servlets, and I think I've figured out why, yet I'm more confused. I used NetBeans 6.7 to develop a PayPal IPN (Instant Payment Notification) message servlet; it listens on port 8080 by default for PayPal IPN messages. I used some sample Java code from its web site, but when it ran, only about 1 in 10 messages came through, and they looked correct. But why 1 in 10? Not 100%, or none? So I asked some questions here and got some advice; one in particular pointed me to Google's App Engine, so I downloaded it and ran the demo guestbook while my IPN servlet was still running in NetBeans. Then the strange thing happened: after I entered "appengine-java-sdk-1.3.2\bin\dev_appserver.cmd appengine-java-sdk-1.3.2\demos\guestbook\war" at the command prompt, I went to the following URL in my browser, "http://localhost:8080/". I thought I would see the Google demo guestbook page. No - what I saw was another servlet I developed 2 years ago: "Web Academy", an online course registration app. How can that happen? I never started it, and I haven't touched that project for years.

    I guess it's because it's also listening on port 8080. So now I understand why the IPN messages only came through 1 in 10 times: another servlet was also listening on that port and could have got the messages intended for IPN, or somehow those two servlets' processes got mixed up and therefore couldn't respond to PayPal properly, and failed. To verify some of my guesses, I turned off NetBeans and ran the Google guestbook again at the prompt; this time http://localhost:8080/ pointed to the demo guestbook page. My URLs look like this:

    [A] PayPal IPN:  http://localhost:8080/PayPal_App/PayPal_Servlet
    [B] Web Academy: http://localhost:8080/

    So now, my questions are:

    <1> Why was the "Web Academy" servlet auto-started when I ran the PayPal servlet?
    <2> If I change the IPN listening port to 8083, would that mean I can run both of them on my PC at the same time without affecting each other?
    <3> But I still don't understand: [A] and [B] look different. If a page for [A] is refreshed, it should show the PayPal content, and another page looking at [B] should show the Web Academy content - and that's exactly what happens when I start NetBeans to run the PayPal servlet; both pages show their respective content correctly, side by side, without interfering with each other. How come the IPN messages couldn't get through 100% of the time?
    <4> In NetBeans, how do I assign port 8080 to servlet [A] and port 8083 to servlet [B]?
    <5> How do I turn off the auto-start of Web Academy in NetBeans?

    Read the article

  • Calculating a Sample Covariance Matrix for Groups with plyr

    - by John A. Ramey
    I'm going to use the sample code from http://gettinggeneticsdone.blogspot.com/2009/11/split-apply-and-combine-in-r-using-plyr.html for this example. So, first, let's copy their example data:

        mydata=data.frame(X1=rnorm(30),
                          X2=rnorm(30,5,2),
                          SNP1=c(rep("AA",10), rep("Aa",10), rep("aa",10)),
                          SNP2=c(rep("BB",10), rep("Bb",10), rep("bb",10)))

    I am going to ignore SNP2 in this example and just pretend the values in SNP1 denote group membership. So then, I may want some summary statistics about each group in SNP1: "AA", "Aa", "aa". If I want to calculate the means for each variable, it makes sense (modifying their code slightly) to use:

        > ddply(mydata, c("SNP1"), function(df) data.frame(meanX1=mean(df$X1), meanX2=mean(df$X2)))
          SNP1      meanX1   meanX2
        1   aa  0.05178028 4.812302
        2   Aa  0.30586206 4.820739
        3   AA -0.26862500 4.856006

    But what if I want the sample covariance matrix for each group? Ideally, I would like a 3D array, where I have the covariance matrix for each group and the third dimension denotes the corresponding group. I tried a modified version of the previous code and got the following results, which have convinced me that I'm doing something wrong:

        > daply(mydata, c("SNP1"), function(df) cov(cbind(df$X1, df$X2)))
        , , = 1

        SNP1           1          2
          aa   1.4961210 -0.9496134
          Aa   0.8833190 -0.1640711
          AA   0.9942357 -0.9955837

        , , = 2

        SNP1           1         2
          aa  -0.9496134  2.881515
          Aa  -0.1640711  2.466105
          AA  -0.9955837  4.938320

    I was thinking that the dim() of the 3rd dimension would be 3, but instead it is 2. Really, this is a sliced-up version of the covariance matrix for each group. If we manually compute the sample covariance matrix for aa, we get:

                   [,1]       [,2]
        [1,]  1.4961210 -0.9496134
        [2,] -0.9496134  2.8815146

    Using plyr, the following gives me what I want in list() form:

        > dlply(mydata, c("SNP1"), function(df) cov(cbind(df$X1, df$X2)))
        $aa
                   [,1]       [,2]
        [1,]  1.4961210 -0.9496134
        [2,] -0.9496134  2.8815146

        $Aa
                   [,1]       [,2]
        [1,]  0.8833190 -0.1640711
        [2,] -0.1640711  2.4661046

        $AA
                   [,1]       [,2]
        [1,]  0.9942357 -0.9955837
        [2,] -0.9955837  4.9383196

        attr(,"split_type")
        [1] "data.frame"
        attr(,"split_labels")
          SNP1
        1   aa
        2   Aa
        3   AA

    But like I said earlier, I would really like this as a 3D array. Any thoughts on where I went wrong with daply(), or suggestions? Of course, I could typecast the list from dlply() to a 3D array, but I'd rather not do this because I will be repeating this process many times in a simulation. As a side note, I found one method (http://www.mail-archive.com/[email protected]/msg86328.html) that provides the sample covariance matrix for each group, but the outputted object is bloated. Thanks in advance.
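
    For reference, the list-to-array fold the poster would rather avoid is only a few lines of base R on top of the dlply() call, and is cheap even inside a simulation loop:

        # Keep dlply() for the per-group covariance matrices, then fold the
        # list into a 2 x 2 x 3 array; the third dimension indexes the groups.
        covs <- dlply(mydata, "SNP1", function(df) cov(cbind(df$X1, df$X2)))
        cov_arr <- array(unlist(covs),
                         dim = c(2, 2, length(covs)),
                         dimnames = list(NULL, NULL, names(covs)))
        cov_arr[, , "aa"]   # the sample covariance matrix for group aa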

    Read the article

  • OpenGL suppresses exceptions in MFC dialog-based application

    - by Mikhail
    Hello. I have an MFC-driven dialog-based application created with MSVS2005. Here is my problem, step by step. I have a button on my dialog and a corresponding click-handler with code like this:

        int* i = 0;
        *i = 3;

    I'm running the debug version of the program, and when I click on the button, Visual Studio grabs focus and reports an "Access violation writing location" exception; the program cannot recover from the error, and all I can do is stop debugging. And this is the right behavior. Now I add some OpenGL initialization code in the OnInitDialog() method:

        HDC DC = GetDC(GetSafeHwnd());

        static PIXELFORMATDESCRIPTOR pfd =
        {
            sizeof(PIXELFORMATDESCRIPTOR),  // size of this pfd
            1,                              // version number
            PFD_DRAW_TO_WINDOW |            // support window
            PFD_SUPPORT_OPENGL |            // support OpenGL
            PFD_DOUBLEBUFFER,               // double buffered
            PFD_TYPE_RGBA,                  // RGBA type
            24,                             // 24-bit color depth
            0, 0, 0, 0, 0, 0,               // color bits ignored
            0,                              // no alpha buffer
            0,                              // shift bit ignored
            0,                              // no accumulation buffer
            0, 0, 0, 0,                     // accum bits ignored
            32,                             // 32-bit z-buffer
            0,                              // no stencil buffer
            0,                              // no auxiliary buffer
            PFD_MAIN_PLANE,                 // main layer
            0,                              // reserved
            0, 0, 0                         // layer masks ignored
        };

        int pixelformat = ChoosePixelFormat(DC, &pfd);
        SetPixelFormat(DC, pixelformat, &pfd);
        HGLRC hrc = wglCreateContext(DC);
        ASSERT(hrc != NULL);
        wglMakeCurrent(DC, hrc);

    Of course this is not exactly what I do; it is the simplified version of my code. Well, now the strange things begin to happen: all initialization is fine, there are no errors in OnInitDialog(), but when I click the button... no exception is thrown. Nothing happens. At all. If I set a breakpoint at the *i = 3; and press F11 on it, the handler function halts immediately and focus is returned to the application, which continues to work well. I can click the button again and the same thing will happen. It seems like someone has handled the access violation and silently returned execution to the main message-receiving loop. If I comment out the line wglMakeCurrent(DC, hrc);, everything works as before: the exception is thrown, Visual Studio catches it and shows a window with an error message, and the program must be terminated afterwards. I experience this problem under Windows 7 64-bit, with an NVIDIA GeForce 8800 and the latest drivers (of 11.01.2010) from the website installed. My colleague has Windows Vista 32-bit and has no such problem: the exception is thrown and the application crashes in both cases. Well, I hope good guys will help me :) PS The problem was originally posted under this topic.
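
    This pattern - exceptions silently vanishing inside window procedures on 64-bit Windows once a driver is in the call chain - is commonly attributed to the kernel swallowing user-mode exceptions thrown across the kernel-to-user callback boundary. A heavily hedged sketch of the usually cited workaround; the APIs come from a Microsoft hotfix (KB976038) and may not be present, so every name here is an assumption to verify:

        #include <windows.h>

        // Assumed hotfix APIs; resolve dynamically since older kernel32s lack them.
        typedef BOOL (WINAPI *GetPolicyFn)(LPDWORD);
        typedef BOOL (WINAPI *SetPolicyFn)(DWORD);
        static const DWORD kCallbackFilterEnabled = 0x1; // PROCESS_CALLBACK_FILTER_ENABLED

        void DisableExceptionSwallowing()
        {
            HMODULE k32 = GetModuleHandleA("kernel32.dll");
            GetPolicyFn getPolicy =
                (GetPolicyFn)GetProcAddress(k32, "GetProcessUserModeExceptionPolicy");
            SetPolicyFn setPolicy =
                (SetPolicyFn)GetProcAddress(k32, "SetProcessUserModeExceptionPolicy");
            if (getPolicy && setPolicy)
            {
                DWORD flags;
                if (getPolicy(&flags))
                    setPolicy(flags & ~kCallbackFilterEnabled); // stop swallowing
            }
        }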

    Read the article

  • Write Scheme data structures so they can be eval-d back in, or alternative

    - by Jesse Millikan
    I'm writing an application (a juggling pattern animator) in PLT Scheme that accepts Scheme expressions as values for some fields. I'm attempting to write a small text editor that will let me "explode" expressions into expressions that can still be eval'd but contain the data as literals, for manual tweaking. For example, (4hss->sexp "747") is a function call that generates a legitimate pattern. If I eval and print that, it becomes:

        (((7 3) - - -) (- - (4 2) -) (- (7 2) - -) (- - - (7 1))
         ((4 0) - - -) (- - (7 0) -) (- (7 2) - -) (- - - (4 3))
         ((7 3) - - -) (- - (7 0) -) (- (4 1) - -) (- - - (7 1)))

    which can be "read" as a string, but will not "eval" the same as the function call. For this expression, of course, all I would need is as simple as (quote (((7 3 ..., but other examples are non-trivial. This one, for example, contains structs, which print as vectors:

        pair-of-jugglers
        ; -->
        (#(struct:hand #(struct:position -0.35 2.0 1.0) #(struct:position -0.6 2.05 1.1) 1.832595714594046)
         #(struct:hand #(struct:position 0.35 2.0 1.0) #(struct:position 0.6 2.0500000000000003 1.1) 1.308996938995747)
         #(struct:hand #(struct:position 0.35 -2.0 1.0) #(struct:position 0.6 -2.05 1.1) -1.3089969389957472)
         #(struct:hand #(struct:position -0.35 -2.0 1.0) #(struct:position -0.6 -2.05 1.1) -1.8325957145940461))

    I've thought of at least three possible solutions, none of which I like very much.

    Solution A is to write a recursive eval-able output function myself for a reasonably large subset of the values that I might be using. There (probably...) won't be any circular references, by the nature of the data structures used, so that wouldn't be such a long job. The output would end up looking like:

        `(((3 0) (...                        ; ex 1
        `(,(make-hand (make-position ...     ; ex 2

    or even worse if I couldn't figure out how to do it properly with quasiquoting.

    Solution B would be to write out everything as

        (read (open-input-string "(big-long-s-expression)"))

    which, technically, solves the problem I'm bringing up but is... ugly.

    Solution C might be a different approach: giving up eval and using only read for parsing input, or an uglier approach where the s-expression is used directly as data if eval fails. Both seem unpleasant compared to using Scheme values directly.

    Undiscovered Solution D would be a PLT Scheme option, function or library I haven't located that would match Solution A.
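
    Regarding Solution D: PLT Scheme does ship a serialization library that round-trips structs, provided the struct types are declared serializable. A sketch, assuming scheme/serialize is available in the version in use:

        ;; Structs declared this way gain read/write-able external forms.
        (require scheme/serialize)

        (define-serializable-struct position (x y z))

        (define p (make-position -0.35 2.0 1.0))
        (define s (serialize p))   ; a value safe to write out and read back
        (deserialize s)            ; reconstructs an equivalent position struct

    It is not eval-able source text, but write/read on the serialized form plus deserialize covers the same "explode to literal data" round trip.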

    Read the article

  • Dependency Injection and Unit of Work pattern

    - by sunwukung
    I have a dilemma. I've used DI (read: factory) to provide core components for a homebrew ORM. The container provides database connections, DAOs, mappers and their resultant domain objects on request. Here's a basic outline of the mapper and domain object classes:

        class Mapper {
            public function __construct($DAO) {
                $this->DAO = $DAO;
            }
            public function load($id) {
                if (isset(Monitor::$members[$id])) {
                    return Monitor::$members[$id];
                }
                $values = $this->DAO->selectStmt($id);
                // field mapping process omitted for brevity
                $Object = new Object($values);
                return $Object;
            }
        }

        class User {
            public function setName($string) {
                $this->name = $string;
                // mark modified by means fair or foul
            }
        }

    The ORM also contains a class (Monitor) based on the Unit of Work pattern, in outline:

        class Monitor {
            private static $modified = array();
            private static $dirty = array();
            public function markClean($class) { /* ... */ }
            public function markModified($class) { /* ... */ }
        }

    The ORM class itself simply coordinates resources extracted from the DI container. So, to instantiate a new User object:

        $Container = new DI_Container;
        $ORM = new ORM($Container);
        $User = $ORM->load('user', 1);
        // at this point the container instantiates a mapper class
        // and passes a database connection to it via the constructor;
        // the mapper then takes the second argument and loads the user with that id
        $User->setName('Rumpelstiltskin');
        // at this point, User must mark itself as "modified"

    My question is this. At the point when a user sets values on a domain object class, I need to mark the class as "dirty" in the Monitor class. As I see it, I have one of three options:

    1. Pass an instance of the Monitor class to the domain object. I noticed this gets marked as recursive in FirePHP - i.e. $this->Monitor->markModified($this).
    2. Instantiate the Monitor directly in the domain object - does this break DI?
    3. Make the Monitor methods static and call them from inside the domain object - this breaks DI too, doesn't it?

    What would be your recommended course of action (other than using an existing ORM - I'm doing this for fun...)?
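
    For option 1, the recursion flag may just be FirePHP rendering a back-reference between the two objects; the registration itself need not recurse. A sketch with the monitor injected once at construction time (names are illustrative):

        <?php
        class Monitor {
            private $modified = array();
            public function markModified($object) {
                // spl_object_hash de-duplicates repeated registrations
                $this->modified[spl_object_hash($object)] = $object;
            }
            public function getModified() {
                return array_values($this->modified);
            }
        }

        class User {
            private $monitor;
            private $name;
            public function __construct(Monitor $monitor) {
                $this->monitor = $monitor;
            }
            public function setName($string) {
                $this->name = $string;
                $this->monitor->markModified($this); // registration, not recursion
            }
        }

        $monitor = new Monitor();
        $user = new User($monitor);   // in the real code the Mapper would inject it
        $user->setName('Rumpelstiltskin');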

    Read the article

  • Programming graphics and sound on PC - Total newbie questions, and lots of them!

    - by Russel
    Hello. This isn't exactly specifically a programming question (or is it?), but I was wondering: how are graphics and sound processed from code and output by the PC?

    My guess for graphics: there is some reserved memory space somewhere that holds exactly enough room for a frame of graphics output for your monitor. E.g. 800 x 600 in 24-bit color mode: 800x600x3 = ~1.4MB of memory space. Between each refresh, the program writes video data to this space; this action is completed before the monitor refresh. Assume a simple 2D game: the graphics data is stored in machine code as many bytes representing color values. Depending on what the program(s) being run instruct the PC, the processor reads the appropriate data and writes it to the memory space. When it is time for the monitor to refresh, it reads from that memory space byte for byte and activates hardware depending on those values for each color element of each pixel. All of this, of course, happens crazy-fast and repeats x times a second, x being the monitor's refresh rate. I've simplified my own likely-incorrect explanation by avoiding talk of double buffering, etc.

    Here are my questions:

    a) How close is the above guess (the three steps)?

    b) How could one incorporate graphics in pure C++ code? I assume the practical thing that everyone does is use a graphics library (SDL, OpenGL, etc.), but, for example, how do these libraries accomplish what they do? Would manual inclusion of graphics in pure C++ code (say, a 2D sprite) involve creating a two-dimensional array of bit values (or three-dimensional to include multiple RGB values per pixel)? Is this how it would be done waaay back in the day?

    c) Also, continuing from above, do libraries such as SDL that use bitmaps actually just build the bitmap/etc. files into the machine code of the executable and use them as though they were built in the same manner mentioned in question b above?

    d) In my hypothetical step 3 above, are there any registers involved? Like, could you write some byte value to some register to output a single color of one byte on the screen? Or is it purely dedicated memory space (=RAM) + hardware interaction?

    e) Finally, how is all of this done for sound? (I have no idea :) )
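
    A toy illustration of the "step 2" guess, in C: a 24-bit frame buffer really is just bytes, and drawing means writing three of them per pixel. (Real systems interpose a GPU and driver rather than a plain array; this sketch is only the mental model.)

        #include <stdint.h>
        #include <string.h>

        #define W 800
        #define H 600
        static uint8_t framebuffer[W * H * 3];   /* ~1.4 MB, as computed above */

        static void put_pixel(int x, int y, uint8_t r, uint8_t g, uint8_t b)
        {
            uint8_t *p = &framebuffer[(y * W + x) * 3];
            p[0] = r; p[1] = g; p[2] = b;
        }

        int main(void)
        {
            memset(framebuffer, 0, sizeof framebuffer);  /* clear to black */
            put_pixel(400, 300, 255, 0, 0);              /* one red pixel */
            return 0;
        }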

    Read the article

  • Why aren't min-width and max-width working as I expect?

    - by Nathan Long
    I'm trying to adjust a CSS page layout using min-width and max-width. To simplify the problem, I made this test page. I'm trying it out in the latest versions of Firefox and Chrome, with the same results.

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
          "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
          <head>
            <title>Testing min-width and max-width</title>
            <style type="text/css">
              div{float: left; max-width: 400px; min-width: 200px;}
              div.a{background: orange;}
              div.b{background: gray;}
            </style>
          </head>
          <body>
            <div class="a">
              (Giant block of filler text here)
            </div>
            <div class="b">
              (Giant block of filler text here)
            </div>
          </body>
        </html>

    Here's what I expect to happen:

    1. With the browser maximized, the divs sit side by side, each 400px wide: their maximum width
    2. Shrink the browser window, and they both shrink to 200px: their minimum width
    3. Further shrinking the browser has no effect on them

    Here's what actually happens, starting at step 2:

    2. Shrink the browser window, and as soon as they can't sit side-by-side at their max width, the second div drops below the first
    3. Further shrinking the browser makes them get narrower and narrower, as small as I can make the window

    So here are my questions:

    1. What does max-width mean if the element will sooner hop down in the layout than go lower than its maximum width?
    2. What does min-width mean if the element will happily get narrower than that if the browser window keeps shrinking?
    3. Is there any way to achieve what I want: have these elements sit side-by-side, happily shrinking until they reach 200px each, and only then adjust the layout so that the second one drops down?
    4. And of course... What am I doing wrong?

    Read the article

  • Available Coroutine Libraries in Java

    - by JUST MY correct OPINION
    I would like to do some stuff in Java that would be clearer if written using concurrent routines, but for which full-on threads are serious overkill. The answer, of course, is coroutines, but there doesn't appear to be any coroutine support in the standard Java libraries, and a quick Google search brings up tantalising hints here and there, but nothing substantial. Here's what I've found so far:

    - JSIM has a coroutine class, but it looks pretty heavyweight and conflates, seemingly, with threads at points. The point of this is to reduce the complexity of full-on threading, not to add to it. Further, I'm not sure that the class can be extracted from the library and used independently.
    - Xalan has a coroutine set class that does coroutine-like stuff, but again it's dubious whether this can be meaningfully extracted from the overall library. It also looks like it's implemented as a tightly controlled form of thread pool, not as actual coroutines.
    - There's a Google Code project which looks like what I'm after, but if anything it looks more heavyweight than using threads would be. I'm basically nervous of something that requires software to dynamically change the JVM bytecode at runtime to do its work. This looks like overkill and like something that will cause more problems than coroutines would solve. Further, it looks like it doesn't implement the whole coroutine concept. By my glance-over, it gives a yield feature that just returns to the invoker. Proper coroutines allow yields to transfer control to any known coroutine directly. Basically this library, heavyweight and scary as it is, only gives you support for iterators, not fully general coroutines.
    - The promisingly named Coroutine for Java fails because it's a platform-specific (obviously using JNI) solution.

    And that's about all I've found. I know about the native JVM support for coroutines in the Da Vinci Machine, and I also know about the JNI continuations trick for doing this. These are not really good solutions for me, however, as I would not necessarily have control over which VM or platform my code would run on. (Indeed, any bytecode manipulation system would suffer similar problems - it would be best were this pure Java, if possible. Runtime bytecode manipulation would prevent me from using this on Android, for example.) So does anybody have any pointers? Is this even possible? If not, will it be possible in Java 7?

    Edited to add: Just to ensure that confusion is contained, this is a related question to my other one, but not the same. This one is looking for an existing implementation in a bid to avoid reinventing the wheel unnecessarily. The other one is a question relating to how one would go about implementing coroutines in Java should this question prove unanswerable. The intent is to keep different questions on different threads.

    Read the article

  • XML validation in Java - why does this fail?

    - by jd
    Hi, first time dealing with XML, so please be patient. The code below is probably evil in a million ways (I'd be very happy to hear about all of them), but the main problem is of course that it doesn't work :-)

        public class Test {
            private static final String JSDL_SCHEMA_URL = "http://schemas.ggf.org/jsdl/2005/11/jsdl";
            private static final String JSDL_POSIX_APPLICATION_SCHEMA_URL = "http://schemas.ggf.org/jsdl/2005/11/jsdl-posix";

            public static void main(String[] args) {
                System.out.println(Test.createJSDLDescription("/bin/echo", "hello world"));
            }

            private static String createJSDLDescription(String execName, String args) {
                Document jsdlJobDefinitionDocument = getJSDLJobDefinitionDocument();
                String xmlString = null;

                // create the elements
                Element jobDescription = jsdlJobDefinitionDocument.createElement("JobDescription");
                Element application = jsdlJobDefinitionDocument.createElement("Application");
                Element posixApplication = jsdlJobDefinitionDocument.createElementNS(JSDL_POSIX_APPLICATION_SCHEMA_URL, "POSIXApplication");
                Element executable = jsdlJobDefinitionDocument.createElement("Executable");
                executable.setTextContent(execName);
                Element argument = jsdlJobDefinitionDocument.createElement("Argument");
                argument.setTextContent(args);

                // join them into a tree
                posixApplication.appendChild(executable);
                posixApplication.appendChild(argument);
                application.appendChild(posixApplication);
                jobDescription.appendChild(application);
                jsdlJobDefinitionDocument.getDocumentElement().appendChild(jobDescription);

                DOMSource source = new DOMSource(jsdlJobDefinitionDocument);
                validateXML(source);

                try {
                    Transformer transformer = TransformerFactory.newInstance().newTransformer();
                    transformer.setOutputProperty(OutputKeys.INDENT, "yes");
                    StreamResult result = new StreamResult(new StringWriter());
                    transformer.transform(source, result);
                    xmlString = result.getWriter().toString();
                } catch (Exception e) {
                    e.printStackTrace();
                }
                return xmlString;
            }

            private static Document getJSDLJobDefinitionDocument() {
                DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
                DocumentBuilder builder = null;
                try {
                    builder = factory.newDocumentBuilder();
                } catch (Exception e) {
                    e.printStackTrace();
                }
                DOMImplementation domImpl = builder.getDOMImplementation();
                Document theDocument = domImpl.createDocument(JSDL_SCHEMA_URL, "JobDefinition", null);
                return theDocument;
            }

            private static void validateXML(DOMSource source) {
                try {
                    URL schemaFile = new URL(JSDL_SCHEMA_URL);
                    SchemaFactory schemaFactory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
                    Schema schema = schemaFactory.newSchema(schemaFile);
                    Validator validator = schema.newValidator();
                    DOMResult result = new DOMResult();
                    validator.validate(source, result);
                    System.out.println("is valid");
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }

    It spits out a somewhat odd message:

        org.xml.sax.SAXParseException: cvc-complex-type.2.4.a: Invalid content was found starting with element 'JobDescription'. One of '{"http://schemas.ggf.org/jsdl/2005/11/jsdl":JobDescription}' is expected.

    Where am I going wrong here? Thanks a lot.
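
    The error message is the hint: the validator expects JobDescription in the JSDL namespace, but createElement() creates elements in no namespace at all. A sketch of the likely fix, creating every element with createElementNS (which namespace each POSIX child belongs to depends on the JSDL schemas; the guess below puts Executable and Argument in the jsdl-posix namespace alongside POSIXApplication):

        Element jobDescription = jsdlJobDefinitionDocument
                .createElementNS(JSDL_SCHEMA_URL, "JobDescription");
        Element application = jsdlJobDefinitionDocument
                .createElementNS(JSDL_SCHEMA_URL, "Application");
        Element executable = jsdlJobDefinitionDocument
                .createElementNS(JSDL_POSIX_APPLICATION_SCHEMA_URL, "Executable");
        Element argument = jsdlJobDefinitionDocument
                .createElementNS(JSDL_POSIX_APPLICATION_SCHEMA_URL, "Argument");

    If the document is ever re-parsed before validation, the DocumentBuilderFactory should also be made namespace-aware (factory.setNamespaceAware(true)).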

    Read the article

  • Is there a way to split a widescreen monitor in to two or more virtual monitors?

    - by Mike Thompson
    Like most developers, I have grown to love dual monitors. I won't go into all the reasons for their goodness; just take it as a given. However, they are not perfect. You can never seem to line them up "just right". You always end up with the monitors at slightly funny angles. And of course the bezel always gets in the way. And this is with identical monitors. The problem is much worse with different monitors - VMware's multi-monitor feature won't even work with monitors of different resolutions. When you use multiple monitors, one of them becomes your primary monitor of focus. Your focus may flip from one monitor to the other, but at any point in time you are usually focusing on only one monitor. There are exceptions to this (WinDiff, Excel), but this is generally the case.

    I suggest that a single large monitor with all the benefits of multiple smaller monitors would be a better solution. Widescreen monitors are fantastic, but it is hard to use all the space efficiently. If you are writing code, you are generally working on the left-hand side of the window. If you maximize an editor on a widescreen monitor, the right-hand side of the window will be a sea of white. Programs like WinSplit Revolution will help to organise your windows, but this is really just addressing the symptom, not the problem. Even with WinSplit Revolution, when you maximise a window it will take up the whole screen. You can't lock a window into a specific section of the screen.

    This is where virtual monitors come in. What would be really nice is a video driver that sits on top of the existing driver but allows a single monitor to be virtualised into multiple monitors. Control Panel would see your single physical monitor as two or more virtual monitors. The software could even support a virtual bezel to emphasise what is happening, or you could opt for seamless mode. Programs like WinSplit Revolution and UltraMon would still work. This virtual video driver would allow you to slice and dice your physical monitor into as many virtual monitors as you want. Does anybody know if such software exists? If not, are there any budding Windows display driver gurus out there willing to take up the challenge? I am not after the myriad of virtual desktop/window manager programs that are available. I get frustrated with these programs. They seem good at first, but they usually have some strange behaviour and don't work well with other programs (such as WinSplit Revolution). I want the real thing!

    Read the article

  • Included PHP file calling a JavaScript function

    - by Illes Peter
    Hi there! Here's the deal. I've got index.php, which links to an external JS file in its header. index.php then includes another .php file, which outputs this: "+ add file". addFile() is a JavaScript function defined in the external JS file. Nothing happens when clicking the link; the included PHP does not "see" the JS function. Putting the JS inline in the included PHP makes it all work, but I don't want to do it that way. Any ideas?

    EDIT: here's the source:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
          <head>
            <title>Archie</title>
            <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
            <link rel="stylesheet" href="/screen.css" type="text/css" media="screen"/>
            <script src="/lib/js/archie.js" type="text/javascript"></script>
          </head>
          <body>
            ...
            <!-- included php starts here -->
            <form action="/lib/course.php" method="post">
              <fieldset>
                <div id="addFileLocation"></div>
                <a href="#" onClick="addFile()">+ add file</a>
                <input type="hidden" id="addFileCount" value="0"/>
              </fieldset>
            </form>
            <!-- ends here -->
            ...
          </body>
        </html>

    and the JS:

        <script type="text/javascript">
        //Dynamically add form fields
        //add file browser
        function addFile() {
            var location = document.getElementById('addFileLocation');
            var num = document.getElementById('addFileCount');
            var newnum = (document.getElementById('addFileCount').value - 1) + 2;
            num.value = newnum;
            var newname = 'addFile_' + newnum;
            var newelement = document.createElement('input');
            newelement.setAttribute('name', newname);
            newelement.setAttribute('type', 'file');
            location.appendChild(newelement);
        }
        </script>
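
    One thing worth checking, assuming the second listing is the literal content of /lib/js/archie.js: an external .js file must contain raw JavaScript only. The <script> wrapper tags are HTML; inside a .js file they are a syntax error, the whole file fails to parse, and addFile() is never defined - which matches the "nothing happens" symptom exactly. A sketch of the file without the wrapper:

        // /lib/js/archie.js - no <script> tags in an external file.
        // Dynamically adds a file-input field below #addFileLocation.
        function addFile() {
            var location = document.getElementById('addFileLocation');
            var num = document.getElementById('addFileCount');
            var newnum = parseInt(num.value, 10) + 1;  // same intent as (value - 1) + 2
            num.value = newnum;
            var newelement = document.createElement('input');
            newelement.setAttribute('name', 'addFile_' + newnum);
            newelement.setAttribute('type', 'file');
            location.appendChild(newelement);
        }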

    Read the article

  • Drag to pan on a UserControl

    - by Matías
    Hello, I'm trying to build my own "PictureBox-like" control, adding some functionality. For example, I want to be able to pan over a big image by simply clicking and dragging with the mouse. The problem seems to be in my OnMouseMove method. If I use the following code, I get the drag speed and precision I want, but of course, when I release the mouse button and try to drag again, the image is restored to its original position:

        using System.Drawing;
        using System.Windows.Forms;

        namespace Testing
        {
            public partial class ScrollablePictureBox : UserControl
            {
                private Image image;
                private bool centerImage;

                public Image Image
                {
                    get { return image; }
                    set { image = value; Invalidate(); }
                }

                public bool CenterImage
                {
                    get { return centerImage; }
                    set { centerImage = value; Invalidate(); }
                }

                public ScrollablePictureBox()
                {
                    InitializeComponent();
                    SetStyle(ControlStyles.AllPaintingInWmPaint | ControlStyles.OptimizedDoubleBuffer, true);
                    Image = null;
                    AutoScroll = true;
                    AutoScrollMinSize = new Size(0, 0);
                }

                private Point clickPosition;
                private Point scrollPosition;

                protected override void OnMouseDown(MouseEventArgs e)
                {
                    base.OnMouseDown(e);
                    clickPosition.X = e.X;
                    clickPosition.Y = e.Y;
                }

                protected override void OnMouseMove(MouseEventArgs e)
                {
                    base.OnMouseMove(e);
                    if (e.Button == MouseButtons.Left)
                    {
                        scrollPosition.X = clickPosition.X - e.X;
                        scrollPosition.Y = clickPosition.Y - e.Y;
                        AutoScrollPosition = scrollPosition;
                    }
                }

                protected override void OnPaint(PaintEventArgs e)
                {
                    base.OnPaint(e);
                    e.Graphics.FillRectangle(new Pen(BackColor).Brush, 0, 0, e.ClipRectangle.Width, e.ClipRectangle.Height);
                    if (Image == null) return;
                    int centeredX = AutoScrollPosition.X;
                    int centeredY = AutoScrollPosition.Y;
                    if (CenterImage)
                    {
                        // Something not relevant
                    }
                    AutoScrollMinSize = new Size(Image.Width, Image.Height);
                    e.Graphics.DrawImage(Image, new RectangleF(centeredX, centeredY, Image.Width, Image.Height));
                }
            }
        }

    But if I modify my OnMouseMove method to look like this:

        protected override void OnMouseMove(MouseEventArgs e)
        {
            base.OnMouseMove(e);
            if (e.Button == MouseButtons.Left)
            {
                scrollPosition.X += clickPosition.X - e.X;
                scrollPosition.Y += clickPosition.Y - e.Y;
                AutoScrollPosition = scrollPosition;
            }
        }

    ...you will see that the dragging is not as smooth as before, and sometimes it behaves weirdly (like with lag or something). What am I doing wrong? I've also tried removing all "base" calls in a desperate move to solve this issue, haha, but again, it didn't work. Thanks for your time.
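
    A common fix for the snap-back: record where the scroll position was when the drag started, and offset from that, so each new drag continues from the current position. A sketch (note AutoScrollPosition reports negative offsets when read, but expects positive ones when set):

        private Point clickPosition;
        private Point scrollAtClick;

        protected override void OnMouseDown(MouseEventArgs e)
        {
            base.OnMouseDown(e);
            clickPosition = new Point(e.X, e.Y);
            // Negate the getter's values so they can be fed back to the setter.
            scrollAtClick = new Point(-AutoScrollPosition.X, -AutoScrollPosition.Y);
        }

        protected override void OnMouseMove(MouseEventArgs e)
        {
            base.OnMouseMove(e);
            if (e.Button == MouseButtons.Left)
            {
                AutoScrollPosition = new Point(
                    scrollAtClick.X + (clickPosition.X - e.X),
                    scrollAtClick.Y + (clickPosition.Y - e.Y));
            }
        }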

    Read the article

  • Do Websites need Local Databases Anymore?

    - by viatropos
    If there's a better place to ask this, please let me know. Every time I build a new website/blog/shopping-cart/etc., I keep trying to do the following:

    - Extract common functionality into reusable code (RubyGems and jQuery plugins, mostly).
    - If possible, convert that gem into a small service so I never have to deal with a database for the objects involved (by service, I mean something lean and mean, usually built with the Sinatra web framework with a few core models).

    My assumption is: if I can remove dependencies on local databases, that will make things easier and more scalable in the long run (scalable in terms of reusability and manageability, not necessarily database performance). I'm not sure if that's a good or bad assumption yet. What do you think? I've made this assumption for the following reason: most serious database/model functionality has been built on the internet somewhere. Just to name a few:

    - Social network API: Facebook
    - Messaging API: Twitter
    - Mailing API: Google
    - Event API: Eventbrite
    - Shopping API: Shopify
    - Comment API: Disqus
    - Form API: Wufoo
    - Image API: Picasa
    - Video API: YouTube
    - ...

    Each of those things is fairly complicated to build from scratch and to make as optimized, simple, and easy to use as those companies have made them. So if I build an app that shows pictures (Picasa) on an event page (Eventbrite), and you can see who joined the event (Facebook Events), and send them emails (Google Apps API), and have them fill out monthly surveys (Wufoo), and watch a video when they're done (YouTube), all integrated into a custom, easy-to-use website, and I can do that without ever creating a local database, is that a good thing?

    I ask because there are two things missing from the puzzle that keep forcing me to create that local database:

    - a Post API
    - a RESTful/pretty URL API

    While there are plenty of blogging systems and APIs for them, there is no one place where you can just write content and have it become part of some massive thing. For every app, I have to write code for creating pretty/RESTful URLs and for saving posts. But it seems like that should be a service too! Question is, is that what the website is? ...that place to integrate the world's services for my specific cause... and, sigh, to store posts that only my site has access to. Will everyone always need "their own blog"? Why not just have a profile and write lots of content on an established platform like StackOverflow or Facebook? That way I could write apps entirely without a database and know that I'm doing it right.

    Note: of course at some point you'd need a database, if you were doing something unique or new. But for the case where you're just rewiring information or creating things like videos, events, and products, is it really necessary anymore??

    Read the article

  • How to maximise the largest contiguous block of memory in the Large Object Heap

    - by Unsliced
    The situation is that I am making a WCF call to a remote server which returns an XML document as a string. Most of the time this return value is a few K, sometimes a few dozen K, very occasionally a few hundred K, but very rarely it could be several megabytes (the first problem is that there is no way for me to know). It's these rare occasions that are causing grief. I get a stack trace that starts:

        System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
           at System.Xml.BufferBuilder.AddBuffer()
           at System.Xml.BufferBuilder.AppendHelper(Char* pSource, Int32 count)
           at System.Xml.BufferBuilder.Append(Char[] value, Int32 start, Int32 count)
           at System.Xml.XmlTextReaderImpl.ParseText()
           at System.Xml.XmlTextReaderImpl.ParseElementContent()
           at System.Xml.XmlTextReaderImpl.Read()
           at System.Xml.XmlTextReader.Read()
           at System.Xml.XmlReader.ReadElementString()
           at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationReaderMDRQuery.Read2_getMarketDataResponse()
           at Microsoft.Xml.Serialization.GeneratedAssembly.ArrayOfObjectSerializer2.Deserialize(XmlSerializationReader reader)
           at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle, XmlDeserializationEvents events)
           at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle)
           at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall)
           at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)

    I've read around, and it is because the Large Object Heap is just getting too fragmented, so even preceding the call with a quick check of StringBuilder.EnsureCapacity just causes the OutOfMemoryException to be thrown earlier (and because I'm guessing at what's needed, it might not actually need that much, so my check is causing more problems than it is solving). Some opinions are that there's not much I can do about it. Some of the questions I've asked myself:

    - Use less memory - have you checked for leaks? Yes. The memory usage goes up and down, but there's no fundamental growth that guarantees this will happen. Some of the times it fails, it succeeded at that stage previously.
    - Transfer smaller amounts? Not an option; this is a third-party web service over which I have no control (or at least it would take a long time to resolve, and in the meantime I still have a problem).
    - Can you do something to the LOH to make it less likely to fail? ...Now this is the most fruitful course. It's a 32-bit process (it has to be, for various political, technical and boring reasons), but there are normally hundreds of meg free (multiples of the largest amount for which we've seen failures).
    - Can we monitor the LOH? Using perfmon I can track the size of the heaps, but I don't think there's a way to monitor the largest available contiguous block of memory.

    Question is: any advice or suggestions for things to try?
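
    One mitigation worth sketching, assuming the response can be consumed as a raw stream instead of through the generated proxy: read it with XmlReader so that no single multi-megabyte string is ever materialised on the LOH (the URL and element handling are placeholders):

        using System.Net;
        using System.Xml;

        static void ReadLargeResponse(string url)
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            using (var response = request.GetResponse())
            using (var reader = XmlReader.Create(response.GetResponseStream()))
            {
                while (reader.Read())
                {
                    // Handle elements incrementally rather than buffering
                    // the whole document into one giant string.
                }
            }
        }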

    Read the article
