Search Results

Search found 33445 results on 1338 pages for 'single instance storage'.

  • Is there a good way to execute MySQL statements atomically via JDBC?

    - by javanix
    Suppose I have a table that contains valid data. I would like to modify this data in some way, but I'd like to make sure that if any errors occur with the modification, the table isn't changed and the method returns something to that effect.

    For instance (this is kind of a dumb example, but it illustrates the point, so bear with me), suppose I want to edit all the entries in a "name" column so that they are properly capitalized. For some reason, I want either ALL of the names to have proper capitalization, or NONE of them to have proper capitalization (and the starting state of the table is that NONE of them do).

    Is there an already-implemented way to run a batch update on the table and be assured that, if any one of the updates fails, all changes are rolled back and the table remains unchanged? I can think of a few ways to do this by hand (though suggestions are welcome), but it'd be nice if there were some method I could use that would function this way. I looked at java.sql.Statement#executeBatch(), but I'm not convinced by the documentation that my table wouldn't be changed if it failed in some manner.
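
    For what it's worth, the standard JDBC answer here is a transaction: disable auto-commit, run the batch, and commit or roll back as a unit. A minimal sketch, assuming a transactional storage engine (for MySQL that means InnoDB; MyISAM tables ignore rollback) and with made-up table and column names:

        import java.sql.Connection;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class AtomicBatch {
            public static void capitalizeNames(Connection conn) throws SQLException {
                boolean previousAutoCommit = conn.getAutoCommit();
                conn.setAutoCommit(false);  // open a transaction
                try (Statement stmt = conn.createStatement()) {
                    // hypothetical statements standing in for the real updates
                    stmt.addBatch("UPDATE people SET name = 'Alice' WHERE id = 1");
                    stmt.addBatch("UPDATE people SET name = 'Bob' WHERE id = 2");
                    stmt.executeBatch();
                    conn.commit();          // all updates become visible together
                } catch (SQLException e) {
                    conn.rollback();        // any failure leaves the table unchanged
                    throw e;
                } finally {
                    conn.setAutoCommit(previousAutoCommit);
                }
            }
        }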

  • How to approach parallel processing of messages?

    - by Dan
    I am redesigning the messaging system for my app to use Intel Threading Building Blocks and am stumped trying to decide between two possible approaches. Basically, I have a sequence of message objects and, for each message type, a sequence of handlers. For each message object, I apply each handler registered for that message object's type. The sequential version would be something like this (pseudocode):

        for each message in message_sequence                        <- SEQUENTIAL
            for each handler in (handler_table for message.type)
                apply handler to message                            <- SEQUENTIAL

    The first approach I am considering processes the message objects in turn (sequentially) and applies the handlers concurrently.

    Pros:
    - predictable ordering of messages (i.e., we are guaranteed a FIFO processing order)
    - (potentially) lower latency of processing each message

    Cons:
    - more processing resources available than handlers for a single message type (bad parallelization)
    - bad use of processor cache, since message objects need to be copied for each handler to use
    - large overhead for small handlers

    The pseudocode of this approach would be as follows:

        for each message in message_sequence                        <- SEQUENTIAL
            parallel_for each handler in (handler_table for message.type)
                apply handler to message                            <- PARALLEL

    The second approach is to process the messages in parallel and apply the handlers to each message sequentially.

    Pros:
    - better use of processor cache (keeps the message object local to all handlers which will use it)
    - small handlers don't impose as much overhead (as long as there are other handlers also to be run)
    - more messages are expected than there are handlers, so the potential for parallelism is greater

    Cons:
    - unpredictable ordering: if message A is sent before message B, they may both be processed at the same time, or B may finish processing before all of A's handlers are finished (order is non-deterministic)

    The pseudocode is as follows:

        parallel_for each message in message_sequence               <- PARALLEL
            for each handler in (handler_table for message.type)
                apply handler to message                            <- SEQUENTIAL

    The second approach has more advantages than the first, but non-deterministic ordering is a big disadvantage. Which approach would you choose, and why? Are there any other approaches I should consider (besides the obvious third approach of parallel messages and parallel handlers, which has the disadvantages of both and no real redeeming factors as far as I can tell)? Thanks!
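
    For illustration, a minimal sketch of the second approach (the question is about TBB in C++, but the structure is language-agnostic; all names below are invented): a fixed thread pool fans out across messages while each message's handlers run sequentially on whichever worker picked the message up.

        import java.util.List;
        import java.util.Map;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        class Message {}

        interface Handler {
            void apply(Message m);
        }

        class Dispatcher {
            private final Map<Class<?>, List<Handler>> handlerTable;
            private final ExecutorService pool =
                    Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

            Dispatcher(Map<Class<?>, List<Handler>> handlerTable) {
                this.handlerTable = handlerTable;
            }

            void dispatch(List<Message> messages) {
                for (Message m : messages) {                 // fan out across messages
                    List<Handler> handlers = handlerTable.get(m.getClass());
                    if (handlers == null) continue;
                    pool.submit(() -> {
                        for (Handler h : handlers) {
                            h.apply(m);                      // one message's handlers stay sequential
                        }
                    });
                }
            }
        }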

  • Requested operation requires an OLE DB Session object... - Connecting Excel to SQL server via ADO

    - by Frank V
    I'm attempting to take Excel 2003 and connect it to SQL Server 2000 to run a few dynamically generated SQL queries which ultimately fill certain cells. I'm attempting to do this in VBA via ADO (I've tried 2.8 down to 2.0), but I'm getting an error while setting the ActiveConnection variable, which is inside the ADODB.Connection object. I need to resolve this pretty quickly...

        Requested operation requires an OLE DB Session object, which is not supported by the current provider.

    I'm honestly not sure what this error means, and right now I don't care. How can I get this connection to succeed so that I can run my queries? Here is my VB code:

        Dim SQL As String, RetValue As String
        SQL = " select top 1 DateTimeValue from SrcTable where x='value' " 'Not the real SQL
        RetValue = ""
        Dim RS As ADODB.Recordset
        Dim Con As New ADODB.Connection
        Dim Cmd As New ADODB.Command
        Con.ConnectionString = "Provider=sqloledb;DRIVER=SQL Server;Data Source=Server\Instance;Initial Catalog=MyDB_DC;User Id=<UserName>;Password=<Password>;"
        Con.CommandTimeout = (60 * 30)
        Set Cmd.ActiveConnection = Con ''Error occurs here.
        ' I'm not sure if the rest is right. I've just coded it. Can't get past the line above.
        Cmd.CommandText = SQL
        Cmd.CommandType = adCmdText
        Con.Open
        Set RS = Cmd.Execute()
        If Not RS.EOF Then
            RetValue = RS(0).Value
            Debug.Print "RetValue is: " & RetValue
        End If
        Con.Close

    I imagine something is wrong with the connection string, but I've tried over a dozen variations. Now I'm just shooting in the dark...

    Note/Update: To make matters more confusing, if I Google for the error quoted above, I get a lot of hits back, but nothing seems relevant, or I'm not sure what information is relevant... I've got the VBA code in "Sheet1" under "Microsoft Excel Objects." I've done this before but usually put things in a module. Could this make a difference?

  • JSF: Avoid nested update calls

    - by dhroove
    I don't know if I am on the right track or not, but I'm asking this question anyway. In my JSF project I have this tree structure:

        <p:tree id="resourcesTree" value="#{someBean.root}" var="node"
                selectionMode="single" styleClass="no-border"
                selection="#{someBean.selectedNode}" dynamic="true">
            <p:ajax listener="#{someBean.onNodeSelect}"
                    update=":centerPanel :tableForm :tabForm" event="select"
                    onstart="statusDialog.show();" oncomplete="statusDialog.hide();" />
            <p:treeNode id="resourcesTreeNode">
                <h:outputText value="#{node}" id="lblNode" />
            </p:treeNode>
        </p:tree>

    I have to update this tree after I add or delete something. But whenever I update it, the nested updates are also called; I mean it also tries to update the components ":centerPanel :tableForm :tabForm". This gives me an error that :tableForm was not found in the view, because those forms load in my central panel while this tree is in my right panel. So when I am doing some operation on the tree, it is not always the case that :tableForm is present in my central panel (the design is something like this only).

    So now my question is: can I put in some condition, or is there any way to specify when to update the nested components and when not to? In a nutshell, is there any way to update only :resourcesTree in such a way that the nested updates are not called, so that I can avoid the error? Thanks in advance.

  • C++: select argmax over vector of classes w.r.t. arbitrary expression

    - by karpathy
    Hello, I have trouble describing my problem, so I'll give an example. I have a class description that has a couple of variables in it, for example:

        class A {
            float a, b, c, d;
        };

    Now, I maintain a vector<A> that contains many of these classes. What I need to do very, very often is to find the object inside this vector for which one of its parameters is maximal w.r.t. the others. I.e., the code looks something like:

        int maxi = -1;
        float maxa = -1000;
        for (int i = 0; i < vec.size(); i++) {
            res = vec[i].a;
            if (res > maxa) {
                maxa = res;
                maxi = i;
            }
        }
        return vec[maxi];

    However, sometimes I need to find the class with maximal a, sometimes with maximal b, sometimes the class with maximal 0.8*a + 0.2*b, sometimes I want a maximal a*VAR + b, where VAR is some variable that is assigned in front, etc. In other words, I need to evaluate an expression for every class and take the max. I find myself copy-pasting this everywhere, and only changing the single line that defines res.

    What makes it even more complicated is that even the name of the vector changes. Sometimes it's vec, sometimes it can be something else. I have many vectors that contain A's. This could be changed if it makes the problem too hard.

    Is there some nice way to avoid this insanity in C++? What's the neatest way to handle this? Thank you!
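
    The usual C++ answer is std::max_element with a custom comparator, lambda, or functor that holds the scoring expression. Purely to illustrate the same pass-the-expression-as-a-function idea (sketched in Java, since the idea carries across languages; all names invented):

        import java.util.Comparator;
        import java.util.List;
        import java.util.function.ToDoubleFunction;

        class ArgMax {
            // One generic argmax; the scoring expression is the parameter.
            static <T> T argMax(List<T> vec, ToDoubleFunction<T> score) {
                return vec.stream()
                          .max(Comparator.comparingDouble(score))
                          .orElseThrow(IllegalArgumentException::new);
            }
        }

        // Usage, mirroring the expressions in the question:
        //   A best  = ArgMax.argMax(vec, x -> x.a);
        //   A blend = ArgMax.argMax(vec, x -> 0.8 * x.a + 0.2 * x.b);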

  • Counting string length in javascript and Ruby on Rails

    - by williamjones
    I've got a text area on a web site that should be limited in length. I'm allowing users to enter 255 characters, and am enforcing that limit with a Rails validation:

        validates_length_of :body, :maximum => 255

    At the same time, I added a JavaScript character counter like you see on Twitter, to give feedback to the user on how many characters he has already used, and to disable the submit button when over length. I get that length in JavaScript with a call like this:

        element.length

    Lastly, to enforce data integrity, in my Postgres database I have created this field as a varchar(255), as a last line of defense.

    Unfortunately, these methods of counting characters do not appear to be directly compatible. JavaScript counts the way users expect, with everything counting as a single character. Once the submission hits Rails, however, all of the carriage returns have been converted to \r\n, now taking up two characters' worth of space, which makes a close call fail the Rails validation. Even if I were to hand-code a different length validation in Rails, I think it would still fail when it hits the database, though I haven't confirmed this yet.

    What's the best way for me to make all this work the way the user would want?

    Best solution: an approach that would enable me to meet user expectations, where each character of any type is only one character. If this means increasing the length of the varchar database field, a user should not be able to sneakily send a hand-crafted post that creates a row with more than 255 letters.

    Somewhat acceptable solution: a JavaScript change that enables the user to see the real character count, such that hitting return increments the counter two characters at a time, while properly handling all symbols that might have these strange behaviors.

  • Does Android AsyncTaskQueue or similar exist?

    - by Ben L.
    I read somewhere (and have observed) that starting threads is slow. I always assumed that AsyncTask created and reused a single thread, because it requires being started on the UI thread. The following (anonymized) code is called from a ListAdapter's getView method to load images asynchronously. It works well until the user moves the list quickly, and then it becomes "janky".

        final File imageFile = new File(getCacheDir().getPath() + "/img/" + p.image);
        image.setVisibility(View.GONE);
        view.findViewById(R.id.imageLoading).setVisibility(View.VISIBLE);
        (new AsyncTask<Void, Void, Bitmap>() {
            @Override
            protected Bitmap doInBackground(Void... params) {
                try {
                    Bitmap image;
                    if (!imageFile.exists() || imageFile.length() == 0) {
                        image = BitmapFactory.decodeStream(new URL(
                                "http://example.com/images/" + p.image).openStream());
                        image.compress(Bitmap.CompressFormat.JPEG, 85,
                                new FileOutputStream(imageFile));
                        image.recycle();
                    }
                    image = BitmapFactory.decodeFile(imageFile.getPath(), bitmapOptions);
                    return image;
                } catch (MalformedURLException ex) {
                    // TODO Auto-generated catch block
                    ex.printStackTrace();
                    return null;
                } catch (IOException ex) {
                    // TODO Auto-generated catch block
                    ex.printStackTrace();
                    return null;
                }
            }

            @Override
            protected void onPostExecute(Bitmap image) {
                if (view.getTag() != p) { // The view was recycled.
                    return;
                }
                view.findViewById(R.id.imageLoading).setVisibility(View.GONE);
                view.findViewById(R.id.image).setVisibility(View.VISIBLE);
                ((ImageView) view.findViewById(R.id.image)).setImageBitmap(image);
            }
        }).execute();

    I'm thinking that a queue-based method would work better, but I'm wondering if one exists or if I should attempt to create my own implementation.
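
    A minimal sketch of rolling the queue yourself, assuming no stock AsyncTaskQueue exists: a single-thread executor gives a FIFO queue on one reused background thread, and a Handler posts each decoded bitmap back to the UI thread. The class and callback names are invented.

        import java.io.File;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;
        import android.os.Handler;
        import android.os.Looper;

        class ImageLoadQueue {
            interface Callback {
                void onLoaded(Bitmap bitmap);
            }

            // One reused worker thread; submitted jobs run in FIFO order.
            private final ExecutorService worker = Executors.newSingleThreadExecutor();
            private final Handler ui = new Handler(Looper.getMainLooper());

            void load(final File file, final Callback callback) {
                worker.submit(new Runnable() {
                    public void run() {
                        // decodeFile returns null on failure; the callback should handle that.
                        final Bitmap bitmap = BitmapFactory.decodeFile(file.getPath());
                        ui.post(new Runnable() {
                            public void run() {
                                callback.onLoaded(bitmap);
                            }
                        });
                    }
                });
            }
        }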

  • Basic Question on storing variables c# for use in other classes

    - by Calibre2010
    OK guys, I basically have a class which takes in 3 strings through the parameters of one of its method signatures. I've then tried to map these 3 strings to member variables as a way to store them. However, when I try to read these variables from another class after instantiating this class, they display as null values. This is the class, which gets the 3 strings through the method setDate and does the mapping:

        public class DateLogic
        {
            public string year1;
            public string month1;
            public string day1;

            public DateLogic() { }

            public void setDate(string year, string month, string day)
            {
                year1 = year;
                month1 = month;
                day1 = day;
                // getDate();
            }

            public string getDate()
            {
                return year1 + " " + month1 + " " + day1;
            }
        }

    After this, I try to call this class from here:

        public static string TimeLine2(this HtmlHelper helper, string myString2)
        {
            DateLogic g = new DateLogic();
            string sday = g.day1;
            string smonth = g.month1;
            string syr = g.year1;
        }

    I've been debugging, and the values make it all the way to the member variables, but when they're read from this class it doesn't show them; it just shows nulls. Is this because I'm creating a brand new instance? How do I resolve this?

  • C# creating a Class, having objects as member variables? I think the objects are garbage collected

    - by Bryan
    So I have a class that has the following member variables. I have get and set functions for every piece of data in this class.

        public class NavigationMesh
        {
            public Vector3 node;
            int weight;
            bool isWall;
            bool hasTreasure;

            public NavigationMesh(int x, int y, int z, bool setWall, bool setTreasure)
            {
                // default constructor
                //Console.WriteLine(x + " " + y + " " + z);
                node = new Vector3(x, y, z);
                //Console.WriteLine(node.X + " " + node.Y + " " + node.Z);
                isWall = setWall;
                hasTreasure = setTreasure;
                weight = 1;
            } // end constructor

            public float getX()
            {
                Console.WriteLine(node.X);
                return node.X;
            }

            public float getY()
            {
                Console.WriteLine(node.Y);
                return node.Y;
            }

            public float getZ()
            {
                Console.WriteLine(node.Z);
                return node.Z;
            }

            public bool getWall() { return isWall; }
            public void setWall(bool item) { isWall = item; }
            public bool getTreasure() { return hasTreasure; }
            public void setTreasure(bool item) { hasTreasure = item; }
            public int getWeight() { return weight; }
        } // end class

    In another class, I have a 2-dimensional array that looks like this:

        NavigationMesh[,] mesh;
        mesh = new NavigationMesh[502, 502];

    I use a double for loop to assign this. My problem is that I cannot get the data I need out of the Vector3 node object with my "getters" after I create these objects in my array. I've tried making the Vector3 a static variable, but I think it then refers to the last instance of the object. How do I keep all of these objects in memory? I think they're being garbage collected. Any thoughts?

  • Cast object to interface when created via reflection

    - by Al
    I'm trying some stuff out in Android and I'm stuck when trying to cast a class in another .apk to my interface. I have the interface, and various classes in other .apks that implement that interface. I find the other classes using PackageManager's query methods and use Application#createPackageContext() to get the classloader for that context. I then load the class, create a new instance, and try to cast it to my interface, which I know it definitely implements. When I try to cast, it throws a ClassCastException. I tried various things like loading the interface first, using Class#asSubclass, etc., none of which work. Class#getInterfaces() shows the interface is implemented. My code is below:

        PackageManager pm = getPackageManager();
        List<ResolveInfo> lr = pm.queryIntentServices(new Intent("com.example.some.action"), 0);
        ArrayList<MyInterface> list = new ArrayList<MyInterface>();
        for (ResolveInfo r : lr) {
            try {
                Context c = getApplication().createPackageContext(r.serviceInfo.packageName,
                        Context.CONTEXT_IGNORE_SECURITY | Context.CONTEXT_INCLUDE_CODE);
                ClassLoader cl = c.getClassLoader();
                String className = r.serviceInfo.name;
                if (className != null) {
                    try {
                        Class<?> cls = cl.loadClass(className);
                        Object o = cls.newInstance();
                        if (o instanceof MyInterface) { // fails
                            list.add((MyInterface) o);
                        }
                    } catch (ClassNotFoundException e) {
                        // TODO Auto-generated catch block
                        e.printStackTrace();
                    }
                    // some exceptions removed for readability
                }
            } catch (NameNotFoundException e1) {
                e1.printStackTrace();
            }
        }
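
    A likely explanation (an assumption, not confirmed by the post): a class is identified by its name plus its defining classloader, so a MyInterface loaded through the foreign package's classloader is a different Class object from the local one, and the cast fails. One hedged workaround is to skip the cast and drive the object through reflection; the method name below is hypothetical.

        import java.lang.reflect.Method;

        class ReflectiveCall {
            // "doWork" stands in for whatever method MyInterface actually declares.
            static Object callDoWork(Object o) throws Exception {
                Method m = o.getClass().getMethod("doWork");
                return m.invoke(o);
            }
        }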

  • Improve Log Exceptions

    - by Jaider
    I am planning to use log4net in a new web project. In my experience, I've seen how big the log table can get, and I've also noticed that errors and exceptions are repeated. For instance, I just queried a log table that has more than 132,000 records, used distinct, and found that only 2,500 records are unique (~2%); the others (~98%) are just duplicates.

    So I came up with this idea to improve logging: have a couple of new columns, counter and updated_dt, that are updated every time an attempt is made to insert the same record. If we want to track the users that caused the exception, we need to create a user_log or log_user table to map the N-N relationship.

    This model may make the system slow and inefficient, trying to compare all this long text... Here's the trick: we should also have a hash column, binary(16) or binary(32), that hashes the message and the exception, and configure an index on it. We can use HASHBYTES to help us. I am not an expert in databases, but I think that would be the fastest way to locate a similar record. And because hashing doesn't guarantee uniqueness, it would help to locate those similar records much faster, and we could later compare by message or exception directly to make sure they are unique.

    This is a theoretical/practical solution, but will it work, or will it bring more complexity? What aspects am I leaving out, or what other considerations do I need to keep in mind? A trigger would do the job of insert-or-update, but is a trigger the best way to do it?
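
    The hashing step itself is cheap wherever it runs. Purely for illustration (the post is .NET with SQL Server's HASHBYTES, but the idea is the same), a sketch of computing a fixed-width dedup key from the message and exception text on the client side:

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;

        class LogKey {
            // SHA-256 yields the 32 bytes a binary(32) hash column would hold.
            static byte[] of(String message, Throwable error) throws NoSuchAlgorithmException {
                MessageDigest sha = MessageDigest.getInstance("SHA-256");
                sha.update(message.getBytes(StandardCharsets.UTF_8));
                if (error != null) {
                    sha.update(error.toString().getBytes(StandardCharsets.UTF_8));
                }
                return sha.digest();
            }
        }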

  • ASP.NET ListView, custom DataSources, and editing items

    - by Andrew Shepherd
    The MSDN walkthroughs provide a number of examples where you can drag a DataSource from the toolbox, run through some simple configuration steps, then drag a ListView onto the screen, point it at the DataSource, and hey: you've got full table editing.

    Now I'm trying to write my own DataSource class (a class that implements System.Web.UI.IDataSource) and my own DataSourceView class. I assign an instance of this custom DataSource class to the ListView.DataSource property. The display of all the items is working well. However, updating, inserting, and deleting just are not working. I'm overriding every function I can in my DataSourceView class, and they just aren't being called.

    This is such a huge topic that I'll focus this question on one simple example: when you press the "Edit" button (the button inside the ItemTemplate with a CommandName of "Edit"), you expect the ItemTemplate to be replaced by an EditItemTemplate. This did not happen. The only way I could get it to happen was to handle the ItemEditing event:

        protected void _listViewPublicHolidays_ItemEditing(object sender, ListViewEditEventArgs e)
        {
            _listViewPublicHolidays.EditIndex = e.NewEditIndex;
            _listViewPublicHolidays.DataBind();
        }

    This is hardly a problem, but how come I had to do it at all? In the MSDN walkthroughs where I attach a ListView to a LinqDataSource, this code doesn't have to be written. Can someone who's been here before hazard a guess as to what would be different or missing in my custom datasource?

  • How to perform Rails model validation checks within model but outside of filters using ledermann-rails-settings and extensions

    - by user1277160
    Background: I'm using ledermann-rails-settings (https://github.com/ledermann/rails-settings) on a Rails 2/3 project to virtually extend the model with certain attributes that don't necessarily need to be placed into the DB in a wide table, and it's working out swimmingly for our needs. An additional reason I chose this gem is the post "How to create a form for the rails-settings plugin", which ties ledermann-rails-settings more closely to the model for the purpose of clean form_for usage for administrator GUI support. It's a perfect solution for addressing form_for support, although...

    Something that I'm running into now is properly validating the dynamic getters/setters before they are passed to the ledermann-rails-settings module. At the moment values are saved immediately, regardless of whether the model validation has actually fired; I can see through script/console that validation errors are being raised.

    Example: I would like to validate that the attribute :foo is within the range 0..100 for decimal usage (or even against a regex). I've found, per the quoted post, that I can use standard Rails validators (surprise, surprise), but I want to halt on actually saving any values until those are addressed, i.e. ensure that the user of the GUI has given 61.43 as a numerical value. The following code has been borrowed from the quoted post:

        class User < ActiveRecord::Base
          has_settings

          validates_inclusion_of :foo, :in => 0..100

          def self.settings_attr_accessor(*args)
            # >> SOME SORT OF UNLESS MODEL.VALID? CHECK HERE
            args.each do |method_name|
              eval "
                def #{method_name}
                  self.settings.send(:#{method_name})
                end

                def #{method_name}=(value)
                  self.settings.send(:#{method_name}=, value)
                end
              "
            end
            # >> END UNLESS
          end

          settings_attr_accessor :foo
        end

    Does anyone have any thoughts on pulling the state of the model at this point, outside of having to put this into a before filter? The goal here is to be able to use the standard validations and avoid rolling custom validation checks for each new settings_attr_accessor that is added. Thanks!

  • Project management: Implementing custom errors in VS compilation process

    - by David Lively
    Like many architects, I've developed coding standards through years of experience to which I expect my developers to adhere. (This is especially a problem with the crowd that believes that three or four years of experience makes you a senior-level developer.) Approaching this as a training and code review issue has generated limited success. So I was thinking that it would be great to be able to add custom compile-time errors to the build process to more strictly enforce this and other guidelines.

    For instance, we use stored procedures for ALL database access, which provides procedure-level security, DB encapsulation (table structure is hidden from the app), and other benefits. (Note: I am not interested in starting a debate about this.) Some developers prefer inline SQL or parametrized queries, and that's fine, on their own time and own projects. I'd like a way to add a compilation check that finds, say, anything that looks like

        string sql = "insert into some_table (col1,col2) values (@col1, @col2);";

    and generates an error or, in certain circumstances, a warning, with a message like:

        Inline SQL and parametrized queries are not permitted.

    Or, if they use the var keyword

        var x = new MyClass();

    the message would be:

        Variable definitions must be explicitly typed.

    Do Visual Studio and MSBuild provide a way to add this functionality? I'm thinking that I could use a regular expression to find unacceptable code and generate the correct error, but I'm not sure what, from a performance standpoint, is the best way to integrate this into the build process. We could add a pre- or post-build step to run a custom EXE, but how can I return line- and file-specific errors? Also, I'd like this to run after compilation of each file, rather than post-link.

    Is a regex the best way to perform this type of pattern matching, or should I go crazy and run the code through a C# parser, which would allow node-level validation via the parse tree? I'd appreciate suggestions and tales of prior experience.

  • Capture delete key in CListCtrl and do some processing

    - by user333422
    Hi, I have a class which inherits from CListCtrl, say class list. I have another class, dlg, which inherits from CDialog. Class dlg contains an instance of class list.

    I have a delete button in class dlg; when it's pressed, I delete the selected item in the list control and do lots of other processing. I want the same functionality on the Delete key. I added an OnKeyDown() function to my class list, where I can capture VK_DELETE. But my problem is: how do I do the other processing that I need to do in the dialog class? All that processing is based in the dlg class, not the list class. I have many such dlg classes with different data, and in every dlg class the processing is different.

    I tried capturing VK_DELETE in the dialog class, but it doesn't capture it if the focus is on the list control. I am totally stuck and have no idea how to do this. Please give me some idea of how I can do this. Thanks, SG

  • Discussion on SEO best-practices for site development involving php...

    - by Bradley Herman
    Recently in our work, I've started getting some experience with SEO (finally). It's something I've put off for a long time because I've always maintained that SEO is buzzword b.s. pseudo-science, and that it's more about providing quality, relevant content (assuming proper header tags and the basics are covered). However, sometimes a client doesn't have stellar content yet still demands SEO and high rankings.

    While it's not how I design sites 100% of the time (as design dictates structure), I typically create a basic template from the design my boss gives me, then optimize it, and then strip the top and bottom and move those into header.php and footer.php, using the following to bring in the header and footer based on AJAX versus HTML requests:

        <?php if($_SERVER['HTTP_X_REQUESTED_WITH']==''){ include('includes/header.php'); } ?>
        #content here
        <?php if($_SERVER['HTTP_X_REQUESTED_WITH']==''){ include('includes/footer.php'); } ?>

    Then I use jQuery to intercept page requests, and I use AJAX to fill in, for example, a #copy div with the new content. This avoids unnecessarily loading all the header and footer info every time, but still allows users without JavaScript to access pages without any problems. (Also worth thinking about: depending on the size of the content, do the extra HTTP requests added by this method make it more of a server strain than a single, larger file?)

    I don't have a really solid understanding of meta keywords and their SEO significance, but as I recall reading, the keywords, title, and description on a page should match up to the page's content, i.e. each page should have slightly different keywords/description while retaining some common ground.

    What I'm getting at here is trying to foster a discussion on whether my approach is flawed to begin with, whether there are things I can do (within reason) that keep the site structure simple but allow for better SEO practices, or whether my SEO understandings are wrong. This isn't a question, per se, but hopefully a constructive discussion here that more than just I can learn from. I appreciate any responses and hope to hear from you. Thanks!

  • Throwing a C++ exception after an inline-asm jump

    - by SoapBox
    I have some odd self-modifying code, but at the root of it is a pretty simple problem: I want to be able to execute a jmp (or a call) and then, from that arbitrary point, throw an exception and have it caught by the try/catch block that contained the jmp/call. But when I do this (in gcc 4.4.1 x86_64), the exception results in a terminate(), as it would if the exception were thrown from outside of a try/catch. I don't really see how this is different from throwing an exception from inside of some far-flung library, yet it obviously is, because it just doesn't work.

    How can I execute a jmp or call but still throw an exception back to the original try/catch? Why doesn't this try/catch continue to handle these exceptions as it would if the function were called normally?

    The code:

        #include <iostream>
        #include <stdexcept>

        using namespace std;

        void thrower() {
            cout << "Inside thrower" << endl;
            throw runtime_error("some exception");
        }

        int main() {
            cout << "Top of main" << endl;
            try {
                asm volatile (
                    "jmp *%0" // same thing happens with a call instead of a jmp
                    :
                    : "r"((long)thrower)
                    :
                );
            } catch (exception &e) {
                cout << "Caught : " << e.what() << endl;
            }
            cout << "Bottom of main" << endl << endl;
        }

    The expected output:

        Top of main
        Inside thrower
        Caught : some exception
        Bottom of main

    The actual output:

        Top of main
        Inside thrower
        terminate called after throwing an instance of 'std::runtime_error'
          what():  some exception
        Aborted

  • Designing DAOs around a JSON API for iPhone Development

    - by Bob Spryn
    So I've been trying to design a clean way of grabbing data for my models in iPhone land. All the data for my application comes from JSON APIs. So right now, when a VC needs some models, it does the JSON call itself (async), and when it receives the data, it builds the models. It works, but I'm trying to think of a cleaner method whereby the DAOs retrieve the information for me and return the models, all in an async manner.

    My initial thought is to build a protocol for my DAOs, such that the VC would instantiate a DAO and make itself the delegate. When you requested data, [DAOinstance getAllUsers], the DAO would do all the network request stuff, and then when it had the data, it would call a method on its delegate (the VC) to pass the data back. So I think that's a cool solution, but I realized that if I needed to use the same DAO for different purposes in the same VC, my delegate method would have to branch logic depending on which DAO instance initiated the request.

    So my second thought was to pass 'handler' selectors to the DAO object, a la typical JavaScript patterns. So instead of an official protocol, I would say something like:

        [DAOinstance getAllUsersWithSelector:"TheHandlerFunctionOnMyVC:"]

    Then, when the DAO completed its network activities, it would call the passed selector on the VC and pass the data back.

    So am I headed in the wrong direction entirely here? It seems like maybe an OK way to go. Any pointers or articles on designing this kind of data layer would be sweet. Thanks! Bob

  • Divide and conquer of large objects for GC performance

    - by Aperion
    At my work we're discussing different approaches to cleaning up a large amount of managed memory, ~50-100MB. There are two approaches on the table (read: two senior devs can't agree), and, not having the experience, the rest of the team is unsure which approach is more desirable: performance or maintainability.

    The data being collected is many small items, ~30,000, which in turn contain other items; all objects are managed. There are a lot of references between these objects, including event handlers, but not to outside objects. We'll call this large group of objects and references a single entity, a blob.

    Approach #1: Make sure all references to objects in the blob are severed, and let the GC handle the blob and all the connections.

    Approach #2: Implement IDisposable on these objects, then call Dispose on them, set references to Nothing, and remove handlers.

    The theory behind the second approach is that larger, longer-lived objects take longer to clean up in the GC. So, by cutting the large objects into smaller, bite-size morsels, the garbage collector will process them faster, thus a performance gain.

    So I think the basic question is this: does breaking apart large groups of interconnected objects optimize the data for garbage collection, or is it better to keep them together and rely on the garbage collection algorithms to process the data for you? I feel this is a case of premature optimization, but I do not know enough about the GC to know what helps or hinders it.

  • Executing logic before save or validation with EF Code-First Models

    - by Ryan Norbauer
    I'm still getting accustomed to EF Code First, having spent years working with the Ruby ORM ActiveRecord. ActiveRecord has all sorts of callbacks like before_validation and before_save, where it is possible to modify the object before it is sent off to the data layer. I am wondering if there is an equivalent technique in EF Code First object modeling. I know how to set object members at the time of instantiation, of course (to set default values and so forth), but sometimes you need to intervene at different moments in the object lifecycle.

    To use a slightly contrived example, say I have a join table linking Authors and Plays, represented with a corresponding Authoring object:

        public class Authoring
        {
            public int ID { get; set; }

            [Required]
            public int Position { get; set; }

            [Required]
            public virtual Play Play { get; set; }

            [Required]
            public virtual Author Author { get; set; }
        }

    where Position represents a zero-indexed ordering of the Authors associated with a given Play. (You might have a single "South Pacific" Play with two authors: a "Rodgers" author with Position 0 and a "Hammerstein" author with Position 1.)

    Let's say I wanted to create a method that, before saving an Authoring record, checked to see whether there were any existing authors for the Play to which it was associated. If no, it would set the Position to 0. If yes, it would find the highest Position value associated with that Play and increment it by one.

    Where would I implement such logic within an EF Code First model layer? And, in other cases, what if I wanted to massage data in code before it is checked for validation errors? Basically, I'm looking for an equivalent to the Rails lifecycle hooks mentioned above, or some way to fake it at least. :)

  • Are there pitfalls to using static class/event as an application message bus

    - by Doug Clutter
    I have a static generic class that helps me move events around with very little overhead:

        public static class MessageBus<T> where T : EventArgs
        {
            public static event EventHandler<T> MessageReceived;

            public static void SendMessage(object sender, T message)
            {
                if (MessageReceived != null)
                    MessageReceived(sender, message);
            }
        }

    To create a system-wide message bus, I simply need to define an EventArgs class to pass around any arbitrary bits of information:

        class MyEventArgs : EventArgs
        {
            public string Message { get; set; }
        }

    Anywhere I'm interested in this event, I just wire up a handler:

        MessageBus<MyEventArgs>.MessageReceived += (s,e) => DoSomething();

    Likewise, triggering the event is just as easy:

        MessageBus<MyEventArgs>.SendMessage(this, new MyEventArgs() { Message = "hi mom" });

    Using MessageBus and a custom EventArgs class lets me have an application-wide message sink for a specific type of message. This comes in handy when you have several forms that, for example, display customer information, and maybe a couple of forms that update that information. None of the forms know about each other, and none of them need to be wired to a static "super class".

    I have a couple of questions:

    FxCop complains about using static methods with generics, but this is exactly what I'm after here. I want there to be exactly one MessageBus for each type of message handled. Using a static with a generic saves me from writing all the code that would maintain the list of MessageBus objects.

    Are the listening objects being kept "alive" via the MessageReceived event? For instance, perhaps I have this code in a Form.Load event:

        MessageBus<CustomerChangedEventArgs>.MessageReceived += (s,e) => DoReload();

    When the Form is closed, is the Form being retained in memory because MessageReceived has a reference to its DoReload method? Should I be removing the reference when the form closes?

        MessageBus<CustomerChangedEventArgs>.MessageReceived -= (s,e) => DoReload();

  • Dynamically overriding an abstract method in c#

    - by ng
    I have the following abstract class:

        public abstract class AbstractThing
        {
            public String GetDescription()
            {
                return "This is " + GetName();
            }

            public abstract String GetName();
        }

    Now I would like to implement some new dynamic types from this, like so:

        AssemblyName assemblyName = new AssemblyName();
        assemblyName.Name = "My.TempAssembly";
        AssemblyBuilder assemblyBuilder = Thread.GetDomain()
            .DefineDynamicAssembly(assemblyName, AssemblyBuilderAccess.Run);
        ModuleBuilder moduleBuilder = assemblyBuilder.DefineDynamicModule("DynamicThings");

        TypeBuilder typeBuilder = moduleBuilder.DefineType(someName + "_Thing",
            TypeAttributes.Public | TypeAttributes.Class,
            typeof(AbstractThing));

        MethodBuilder methodBuilder = typeBuilder.DefineMethod("GetName",
            MethodAttributes.Public | MethodAttributes.ReuseSlot |
            MethodAttributes.Virtual | MethodAttributes.HideBySig,
            null,
            Type.EmptyTypes);

        ILGenerator msil = methodBuilder.GetILGenerator();
        msil.EmitWriteLine(selectionList);
        msil.Emit(OpCodes.Ret);

    However, when I try to instantiate via typeBuilder.CreateType(), I get an exception saying that there is no implementation for GetName. Is there something I am doing wrong here? I cannot see the problem.

    Also, what would be the restrictions on instantiating such a class by name? For instance, if I tried to instantiate via "My.TempAssembly.x_Thing", would it be available for instantiation without the Type generated?

  • Technical choices in unmarshaling hash-consed data

    - by Pascal Cuoq
    There seems to be quite a bit of folklore knowledge floating about in restricted circles about the pitfalls of hash-consing combined with the marshaling and unmarshaling of data. I am looking for citable references to these tidbits.

    For instance, someone once pointed me to the library aterm and mentioned that the authors had clearly thought about this, and that the representation on disk was bottom-up (children of a node come before the node itself in the data stream). This is indeed the right way to do things when you need to re-share each node (with a possibly identical node already in memory). This re-sharing pass needs to be done bottom-up, so the unmarshaling itself might as well be too, so that it's possible to do everything in a single pass.

    I am in the process of describing difficulties encountered in our own context, and the solutions we found. I would appreciate any citable reference to the kind of aforementioned folklore knowledge. Some people have obviously encountered the problems before (the aterm library is only one example), but I didn't find anything in writing. Even the little piece of information I have about aterm is hearsay. I am not worried that it's unreliable (you can't make this up), but "personal communication" and "look how it's done in the source code" are considered poor form in citations.

    I have enough references on hash-consing alone. I am only interested in references where it interferes with other aspects of programming, such as marshaling or distribution.

  • Hibernate noob fetch join problem

    - by Bruce
    Hi all. I have two classes, Test2 and Test3. Test2 has an attribute test3 that is an instance of Test3. In other words, I have a unidirectional OneToOne association, with Test2 having a reference to Test3.

    When I select Test2 from the DB, I can see that a separate select is being made to get the details of the associated Test3 instance. This is the famous 1+N selects problem. To fix this to use a single select, I am trying to use the fetch=join annotation, which I understand to be @Fetch(FetchMode.JOIN). However, with fetch set to join, I still see separate selects. Here are the relevant portions of my setup.

    hibernate.cfg.xml:

        <property name="max_fetch_depth">2</property>

    Test2:

        public class Test2 {
            @OneToOne(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
            @JoinColumn(name = "test3_id")
            @Fetch(FetchMode.JOIN)
            public Test3 getTest3() {
                return test3;
            }

    NB: I set the FetchType to EAGER out of desperation, even though it defaults to EAGER for OneToOne mappings anyway, but it made no difference. Thanks for any help!

    Edit: I've pretty much given up on trying to use FetchMode.JOIN. Can anyone confirm that they have got it to work, i.e. produce a left outer join? In the docs I see that "Usually, the mapping document is not used to customize fetching. Instead, we keep the default behavior, and override it for a particular transaction, using left join fetch in HQL." If I do a left join fetch instead:

        query = session.createQuery("from Test2 t2 left join fetch t2.test3");

    then I do indeed get the results I want, i.e. a left outer join in the query.
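
    As an aside, a sketch of the per-query equivalent using the Criteria API (assuming Hibernate 3, where Criteria.setFetchMode overrides the fetch plan much like left join fetch does in HQL):

        import java.util.List;

        import org.hibernate.Criteria;
        import org.hibernate.FetchMode;
        import org.hibernate.Session;

        class Test2Queries {
            @SuppressWarnings("unchecked")
            static List<Test2> loadAllWithTest3(Session session) {
                return session.createCriteria(Test2.class)
                              .setFetchMode("test3", FetchMode.JOIN)  // one outer-joined select
                              .list();
            }
        }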

  • Accepting more simultaneous keyboard inputs

    - by unknownthreat
    Sometimes a normal computer keyboard will only accept a certain number of simultaneous key presses. I have a Logitech keyboard that can accept up to 3-4 key presses at the same time; the computer does not accept any more input if you press more than 4 keys on this keyboard. It also depends on certain areas of your keyboard: some locations allow more keys to be pressed (like the arrow keys), while some locations permit you to press only 1-2 keys. This also differs from keyboard to keyboard; some older keyboards only accept 1-2 keys.

    This isn't problematic for usual office work, but it is when it comes to gaming. For instance, imagine a platform game where you have to jump, attack, and control direction at the same time. This implies several key presses, and some keyboards cannot accept such simultaneous input.

    However, I've tried this in several games, and the number of possible keyboard inputs seems to differ between them. Therefore, we have two issues:

    - Keyboards have different limits on simultaneous inputs.
    - Some games can accept more keyboard inputs than other games.

    At first, I thought this was a hardware-only problem, but why do some programs behave differently? Why can some programs accept more keyboard inputs than other programs? And how can we write our programs to accept more keyboard inputs?
