Search Results

Search found 29235 results on 1170 pages for 'dynamic management objects'.


  • Thread-safe data structure design

    - by Inso Reiges
    Hello, I have to design a data structure that is to be used in a multi-threaded environment. The basic API is simple: insert element, remove element, retrieve element, check that element exists. The structure's implementation uses implicit locking to guarantee the atomicity of a single API call. After I implemented this it became apparent that what I really need is atomicity across several API calls. For example, if a caller needs to check the existence of an element before trying to insert it, he can't do that atomically even if each single API call is atomic: if(!data_structure.exists(element)) { data_structure.insert(element); } The example is somewhat awkward, but the basic point is that we can't trust the result of the "exists" call anymore after we return from the atomic context (the generated assembly clearly shows a minor chance of a context switch between the two calls). What I currently have in mind to solve this is exposing the lock through the data structure's public API. This way clients will have to explicitly lock things, but at least they won't have to create their own locks. Is there a better commonly-known solution to these kinds of problems? And as long as we're at it, can you advise some good literature on thread-safe design? EDIT: I have a better example. Suppose that element retrieval returns either a reference or a pointer to the stored element and not its copy. How can a caller be protected so it can safely use this pointer/reference after the call returns? If you think that not returning copies is a problem, then think about deep copies, i.e. objects that should also copy other objects they point to internally. Thank you.
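
    A minimal Python sketch of the "expose the lock" idea described above (not the poster's actual code; the class and method names are illustrative). Single calls stay atomic behind a re-entrant lock, and the same lock is handed to callers who need to group several calls:

        import threading

        class ConcurrentSet:
            """Single calls are atomic; atomically() lets callers hold the
            same lock across several calls, e.g. check-then-insert."""

            def __init__(self):
                self._lock = threading.RLock()   # re-entrant: single calls still work inside atomically()
                self._items = {}

            def atomically(self):
                return self._lock                # used as: with s.atomically(): ...

            def exists(self, key):
                with self._lock:
                    return key in self._items

            def insert(self, key, value):
                with self._lock:
                    self._items[key] = value

            def remove(self, key):
                with self._lock:
                    self._items.pop(key, None)

            def get(self, key):
                with self._lock:
                    # Returning a copy or an immutable value sidesteps the
                    # "reference escapes the lock" problem from the EDIT.
                    return self._items.get(key)

        s = ConcurrentSet()
        with s.atomically():                     # check-then-insert as one atomic unit
            if not s.exists("job-42"):
                s.insert("job-42", {"state": "queued"})

    The usual alternative to exposing the lock is to add the compound operations to the API itself (an insert_if_absent, for example), which keeps the locking policy inside the structure.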

    Read the article

  • OO vs Simplicity when it comes to user interaction

    - by Oetzi
    Firstly, sorry if this question is rather vague, but it's something I'd really like an answer to. As a project over summer, while I have some downtime from Uni, I am going to build a Monopoly game. This question is more about the general idea of the problem, however, than the specific task I'm trying to carry out. I decided to build this with a bottom-up approach, creating just movement around a forty-space board and then moving on to interaction with spaces. I realised that I was quite unsure of the best way of proceeding with this and I am torn between two design ideas: (1) Giving every space its own object, all sub-classes of a Space object, so the interaction can be defined by the space object itself. I could do this by implementing different land() methods for each type of space. (2) Only giving the Properties and Utilities (as each property has unique features) objects, and creating methods for dealing with the buying/renting etc. in the main class of the program (or Board as I'm calling it). Spaces like Go and Super Tax could be implemented by a small set of conditionals checking to see if the player is on a special space. Option 1 is obviously the OO (and I feel the correct) way of doing things, but I'd like to only have to handle user interaction from the program's main class. In other words, I don't want the space objects to be interacting with the player. Why? Errr. A lot of the coding I've done thus far has had this simplicity but I'm not sure if this is a pipe dream or not for larger projects. Should I really be handling user interaction in an entirely separate class? As you can see I am quite confused about this situation. Is there some way round this? And, does anyone have any advice on practical OO design that could help in general?
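
    As a rough illustration of option 1 (in Python rather than whatever language the game uses; all names are made up), each space can implement land() but return a plain description of what should happen, so the Board or main class still owns every interaction with the player:

        class Space:
            def land(self, player):
                # Return a plain-data "event"; the Board interprets it,
                # so spaces never talk to the user directly.
                return {"action": "nothing"}

        class Property(Space):
            def __init__(self, name, price):
                self.name, self.price, self.owner = name, price, None

            def land(self, player):
                if self.owner is None:
                    return {"action": "offer_purchase", "property": self.name, "price": self.price}
                return {"action": "pay_rent", "to": self.owner, "amount": self.price // 10}

        class Go(Space):
            def land(self, player):
                return {"action": "collect", "amount": 200}

        class SuperTax(Space):
            def land(self, player):
                return {"action": "pay", "amount": 100}

        board = [Go(), Property("Old Kent Road", 60), SuperTax()]
        event = board[1].land("player 1")
        if event["action"] == "offer_purchase":
            # The main class asks the user here, not the Space.
            print(f"Buy {event['property']} for {event['price']}?")

    This keeps the polymorphism of option 1 while preserving the "all user interaction in one place" goal.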

    Read the article

  • Is it important to dispose SolidBrush and Pen?

    - by Joe
    I recently came across this VerticalLabel control on CodeProject. I notice that the OnPaint method creates but doesn't dispose Pen and SolidBrush objects. Does this matter, and if so, how can I demonstrate whatever problems it can cause? EDIT: This isn't a question about the IDisposable pattern in general. I understand that callers should normally call Dispose on any class that implements IDisposable. What I want to know is what problems (if any) can be expected when GDI+ objects are not disposed, as in the above example. It's clear that, in the linked example, OnPaint may be called many times before the garbage collector kicks in, so there's the potential to run out of handles. However, I suspect that GDI+ internally reuses handles in some circumstances (for example, if you use a pen of a specific color from the Pens class, it is cached and reused). What I'm trying to understand is whether code like that in the linked example will be able to get away with neglecting to call Dispose. And if not, to see a sample that demonstrates what problems it can cause. I should add that I have very often (including in the OnPaint documentation on MSDN) seen WinForms code samples that fail to dispose GDI+ objects.

    Read the article

  • Using fields from an association (has_many) model with formtastic in rails

    - by pduersteler
    I searched and tried a lot, but I can't accomplish it as I want... so here's my problem. class Moving < ActiveRecord::Base has_many :movingresources, :dependent => :destroy has_many :resources, :through => :movingresources end class Movingresource < ActiveRecord::Base belongs_to :moving belongs_to :resource end class Resource < ActiveRecord::Base has_many :movingresources has_many :movings, :through => :movingresources end Movingresources contains additional fields, like "quantity". We're working on the views for 'bill'. Thanks to formtastic, the whole relationship thing is simplified to just writing <%= form.input :workers, :as => :check_boxes %> and I get a really nice checkbox list. But what I haven't found out so far is: how can I use the additional fields from 'movingresource', showing my desired fields from that model next to or under each checkbox? I saw different approaches, mainly with manually looping through an array of objects and creating the appropriate forms, using :for in a form.inputs part, or not. But none of those solutions were clean (e.g. they worked for the edit view but not for new, because the required objects were not built or generated, and generating them caused a mess). I want to know your solutions for this! :-)

    Read the article

  • Javascript function using "this = " gives "Invalid left-hand side in assignment"

    - by Brian M. Hunt
    I am trying to get a Javascript object to use the "this" assignments of another object's constructor, as well as assume all of that object's prototype functions. Here's an example of what I'm attempting to accomplish: /* The base - contains assignments to 'this', and prototype functions */ function ObjX(a,b) { this.$a = a, this.$b = b; } ObjX.prototype.getB = function() { return this.$b; }; function ObjY(a,b,c) { // here's what I'm thinking should work: this = ObjX(a, b * 12); /* and by 'work' I mean ObjY should have the following properties: * ObjY.$a == a, ObjY.$b == b * 12, * and ObjY.getB() == ObjX.prototype.getB() * ... unfortunately I get the error: * Uncaught ReferenceError: Invalid left-hand side in assignment */ this.$c = c; // just to further distinguish ObjY from ObjX. } I'd be grateful for your thoughts on how to have ObjY subsume ObjX's assignments to 'this' (i.e. not have to repeat all the this.$* = * assignments in ObjY's constructor) and have ObjY assume ObjX.prototype. My first thought is to try the following: function ObjY(a,b,c) { this.prototype = new ObjX(a,b*12); } Ideally I'd like to learn how to do this in a prototypal way (i.e. not have to use any of those 'classic' OOP substitutes like Base2). It may be noteworthy that ObjY will be anonymous (e.g. factory['ObjX'] = function(a,b,c) { this = ObjX(a,b*12); ... }) -- if I've got the terminology right. Thank you.

    Read the article

  • Visual Studio + Database Edition + CDC = Deploy Fail

    - by Ben
    Hi All, I've got a database using change data capture (CDC) that is created from a Visual Studio database project (GDR2). My problem is that I have a stored procedure that is analyzing the CDC information and then returning data. How is that a problem, you ask? Well, the order of operations is as follows: pre-deployment script, tables, indexes/keys/etc., procedures, post-deployment script. Inside the post-deployment script is where I enable CDC. Herein lies the problem. The procedure that is acting on the CDC tables is bombing because they don't exist yet! I've tried to put the call to sys.sp_cdc_enable_table in the script that creates the table, but it doesn't like that. Error 102 TSD03070: This statement is not recognized in this context. C:...\Schema Objects\Schemas\dbo\Tables\Foo.table.sql 20 1 Foo Is there a better/built-in way to enable CDC such that its references are available when the stored procedures are created? Is there a way to run a script after tables are created but before other objects are created? How about a way to create the procedure, dependencies be damned? Or maybe I'm just doing things that shouldn't be done?!?! Now, I have a workaround: comment out the sproc body, deploy (CDC is created), uncomment the sproc, deploy again. Everything is great until the next time I update a CDC-tracked table. Then I need to comment out the 'offending' procedure again. Thanks for reading my question and thanks for your help!

    Read the article

  • Why is OOP hard for me?

    - by netrox
    I have trouble writing OOP in PHP... I understand the concept but I never create classes for my projects... mainly because it's often a small project and nothing complex. But when I read about OOP, it seems more difficult to code than writing simple procedural statements. It also seems to take up a lot of room, with so many empty abstract classes, and one can easily get lost in the land of objects... it's becoming like a junkyard to me. Also, I noticed that virtually all instructions on how to use OOP use "car" or "cat" or "dog" analogies. Hello... we're not dealing with animals or cars... we're dealing with windows or consoles. You can talk about analogies to death and I will never learn. What I want is to see code that's written to show how objects are created - not "aCow-moo!" For example, I want to see a browser window object displaying, say... three inputs. I want to see an "object" created to output a window with three inputs, then I want to see how overriding works, like changing the window object to display only two inputs instead of three. I think that would make learning easier, wouldn't it? Any recommended tutorials of that nature, instead of quacks, moos, and woofs?
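
    Here is a small sketch in that spirit, in plain Python (no real GUI toolkit; the "window" just prints itself), showing an object that outputs three inputs and an override that narrows it to two:

        class Window:
            inputs = ["name", "email", "message"]     # three inputs

            def render(self):
                print("+--- window ---+")
                for field in self.inputs:
                    print(f"| [{field:>10}] |")
                print("+--------------+")

        class CompactWindow(Window):
            inputs = ["name", "email"]                # override: only two inputs

            def render(self):
                print("(compact)")
                super().render()                      # reuse the parent's drawing code

        Window().render()
        CompactWindow().render()

    The overriding subclass changes only what differs (the data and a small header) and delegates the rest to the base class, which is the payoff the animal analogies are trying to describe.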

    Read the article

  • Deploying ASP.Net MVC application

    - by a_m0d
    I've recently reached the stage where an ASP.net MVC application I am developing is ready to be deployed to the production server. I've worked out how to publish the application - I've got all the files on the server, and can access them over the internet. However, I can't work out how to deploy my database. The server has SQL Server Management Studio Express installed, as the database used is a SQL Server Express database. I have the server instance up and running - I just don't know how to add the tables, etc. to the database. I have created the "CREATE TABLE" scripts on the development machine, but as far as I can see, Management Studio does not provide any way to actually run these scripts. I have looked through all the menu items that I could see, and none of them worked. Even using the "Create new query..." option and pasting the script in didn't work. When I try "File-Open...", select a script to run, set the correct database from the dropdown list on the toolbar, and then execute the script, it complains about not finding the database file (even when I set the USE [...] statement to the correct path). Deleting the USE [...] statement, the script complains that it can't find the [dbo].[Invoices] object; however, it shouldn't be able to find it, because it's trying to create it! tl;dr: What's the best way to make sure that the database on the production machine matches the database on my development machine?

    Read the article

  • Py_INCREF/DECREF: When

    - by Izz ad-Din Ruhulessin
    Is one correct in stating the following? If a Python object is created in a C function, but the function doesn't return it, no INCREF is needed, but a DECREF is. (This next one was later marked as false:) If the function does return it, you do need to INCREF, in the function that receives the return value. When assigning C-typed variables as attributes, like double, int etc., to the Python object, no INCREF or DECREF is needed. Assigning Python objects as attributes to your other Python objects goes like this: PyObject *foo; foo = bar // A Python object tmp = self->foo; Py_INCREF(foo); self->foo = foo; Py_XDECREF(tmp); //taken from the manual, but it is unclear if this works in every situation. EDIT: can I safely use this in every situation? (I haven't run into one where it caused me problems.) dealloc of a Python object needs to DECREF every other Python object that it has as an attribute, but not attributes that are C types. EDIT: With 'C type as an attribute' I mean bar and baz: typedef struct { PyObject_HEAD PyObject *foo; int bar; double baz; } FooBarBaz;

    Read the article

  • Checking for nil in view in Ruby on Rails

    - by seaneshbaugh
    I've been working with Rails for a while now and one thing I find myself constantly doing is checking to see if some attribute or object is nil in my view code before I display it. I'm starting to wonder if this is always the best idea. My rationale so far has been that since my application(s) rely on user input, unexpected things can occur. If I've learned one thing from programming in general, it's that users inputting things the programmer didn't think of is one of the biggest sources of run-time errors. By checking for nil values I'm hoping to sidestep that and have my views gracefully handle the problem. The thing is, though, that for various reasons I typically have similar nil or invalid value checks in either my model or controller code. I wouldn't call it code duplication in the strictest sense, but it just doesn't seem very DRY. If I've already checked for nil objects in my controller, is it okay if my view just assumes the object truly isn't nil? For displayed attributes that can be nil it makes sense to me to check every time, but for the objects themselves I'm not sure what the best practice is. Here's a simplified, but typical example of what I'm talking about: controller code def show @item = Item.find_by_id(params[:id]) @folders = Folder.find(:all, :order => 'display_order') if @item == nil or @item.folder == nil redirect_to(root_url) and return end end view code <% if @item != nil %> display the item's attributes here <% if @item.folder != nil %> <%= link_to @item.folder.name, folder_path(@item.folder) %> <% end %> <% else %> Oops! Looks like something went horribly wrong! <% end %> Is this a good idea or is it just silly?

    Read the article

  • How can I strip Python logging calls without commenting them out?

    - by cdleary
    Today I was thinking about a Python project I wrote about a year back where I used logging pretty extensively. I remember having to comment out a lot of logging calls in inner-loop-like scenarios (the 90% code) because of the overhead (hotshot indicated it was one of my biggest bottlenecks). I wonder now if there's some canonical way to programmatically strip out logging calls in Python applications without commenting and uncommenting all the time. I'd think you could use inspection/recompilation or bytecode manipulation to do something like this and target only the code objects that are causing bottlenecks. This way, you could add a manipulator as a post-compilation step and use a centralized configuration file, like so: [Leave ERROR and above] my_module.SomeClass.method_with_lots_of_warn_calls [Leave WARN and above] my_module.SomeOtherClass.method_with_lots_of_info_calls [Leave INFO and above] my_module.SomeWeirdClass.method_with_lots_of_debug_calls Of course, you'd want to use it sparingly and probably with per-function granularity -- only for code objects that have shown logging to be a bottleneck. Anybody know of anything like this? Note: There are a few things that make this more difficult to do in a performant manner because of dynamic typing and late binding. For example, any calls to a method named debug may have to be wrapped with an if not isinstance(log, Logger). In any case, I'm assuming all of the minor details can be overcome, either by a gentleman's agreement or some run-time checking. :-)
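
    There is no standard "strip the calls out of the bytecode" switch, but two cheap, commonly used workarounds in plain Python are sketched below (this is not the bytecode-rewriting approach described above): guard hot-path calls with isEnabledFor so only one cheap check runs per iteration, or put them behind if __debug__ so running the interpreter with -O removes the block at compile time.

        import logging

        logger = logging.getLogger("hot_path")

        def do_work(item):
            pass

        def process(items):
            for item in items:
                # The call (and any expensive argument formatting) only happens
                # when DEBUG is actually enabled for this logger.
                if logger.isEnabledFor(logging.DEBUG):
                    logger.debug("processing %r", item)
                # Under `python -O`, __debug__ is False and this whole block is
                # compiled away, so the call disappears entirely.
                if __debug__:
                    logger.debug("extra diagnostics for %r", item)
                do_work(item)

        process(range(3))

    Neither gives per-function granularity from a config file, but both avoid the comment/uncomment cycle at the cost of a one-time edit to the hot call sites.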

    Read the article

  • Rails: generating URLs for actions in JSON response

    - by Chris Butler
    In a view I am generating an HTML canvas of figures based on model data in an app. In the view I am preloading JSON model data in the page like this (to avoid an initial request back): <script type="text/javascript" charset="utf-8"> <% ActiveRecord::Base.include_root_in_json = false -%> var objects = <%= @objects.to_json(:include => :other_objects) %>; ... Based on mouse (or touch) interaction I want to redirect to other parts of my app that are model specific (such as view, edit, delete, etc.). Rather than hard code the URLs in my JavaScript I want to generate them from Rails (which means it always adapts the latest routes). It seems like I have one of three options: Add an empty attr to the model that the controller fills in with the appropriate URL (we don't want to use routes in the model) before the JSON is generated Generate custom JSON where I add the different URLs manually Generate the URL as a template from Rails and replace the IDs in JavaScript as appropriate I am starting to lean towards #1 for ease of implementation and maintainability. Are there any other options that I am missing? Is #1 not the best? Thanks! Chris

    Read the article

  • FluentNHibernate mapping of composite foreign keys

    - by Faron
    I have an existing database schema and wish to replace the custom data access code with Fluent.NHibernate. The database schema cannot be changed since it already exists in a shipping product. And it is preferable if the domain objects did not change or only changed minimally. I am having trouble mapping one unusual schema construct illustrated with the following table structure: CREATE TABLE [Container] ( [ContainerId] [uniqueidentifier] NOT NULL, CONSTRAINT [PK_Container] PRIMARY KEY ( [ContainerId] ASC ) ) CREATE TABLE [Item] ( [ItemId] [uniqueidentifier] NOT NULL, [ContainerId] [uniqueidentifier] NOT NULL, CONSTRAINT [PK_Item] PRIMARY KEY ( [ContainerId] ASC, [ItemId] ASC ) ) CREATE TABLE [Property] ( [ContainerId] [uniqueidentifier] NOT NULL, [PropertyId] [uniqueidentifier] NOT NULL, CONSTRAINT [PK_Property] PRIMARY KEY ( [ContainerId] ASC, [PropertyId] ASC ) ) CREATE TABLE [Item_Property] ( [ContainerId] [uniqueidentifier] NOT NULL, [ItemId] [uniqueidentifier] NOT NULL, [PropertyId] [uniqueidentifier] NOT NULL, CONSTRAINT [PK_Item_Property] PRIMARY KEY ( [ContainerId] ASC, [ItemId] ASC, [PropertyId] ASC ) ) CREATE TABLE [Container_Property] ( [ContainerId] [uniqueidentifier] NOT NULL, [PropertyId] [uniqueidentifier] NOT NULL, CONSTRAINT [PK_Container_Property] PRIMARY KEY ( [ContainerId] ASC, [PropertyId] ASC ) ) The existing domain model has the following class structure: The Property class contains other members representing the property's name and value. The ContainerProperty and ItemProperty classes have no additional members. They exist only to identify the owner of the Property. The Container and Item classes have methods that return collections of ContainerProperty and ItemProperty respectively. Additionally, the Container class has a method that returns a collection of all of the Property objects in the object graph. My best guess is that this was either a convenience method or a legacy method that was never removed. The business logic mainly works with Item (as the aggregate root) and only works with a Container when adding or removing Items. I have tried several techniques for mapping this but none work so I won't include them here unless someone asks for them. How would you map this?

    Read the article

  • Setting up relations/mappings for a SQLAlchemy many-to-many database

    - by Brent Ramerth
    I'm new to SQLAlchemy and relational databases, and I'm trying to set up a model for an annotated lexicon. I want to support an arbitrary number of key-value annotations for the words which can be added or removed at runtime. Since there will be a lot of repetition in the names of the keys, I don't want to use this solution directly, although the code is similar. My design has word objects and property objects. The words and properties are stored in separate tables with a property_values table that links the two. Here's the code: from sqlalchemy import Column, Integer, String, Table, create_engine from sqlalchemy import MetaData, ForeignKey from sqlalchemy.orm import relation, mapper, sessionmaker from sqlalchemy.ext.declarative import declarative_base engine = create_engine('sqlite:///test.db', echo=True) meta = MetaData(bind=engine) property_values = Table('property_values', meta, Column('word_id', Integer, ForeignKey('words.id')), Column('property_id', Integer, ForeignKey('properties.id')), Column('value', String(20)) ) words = Table('words', meta, Column('id', Integer, primary_key=True), Column('name', String(20)), Column('freq', Integer) ) properties = Table('properties', meta, Column('id', Integer, primary_key=True), Column('name', String(20), nullable=False, unique=True) ) meta.create_all() class Word(object): def __init__(self, name, freq=1): self.name = name self.freq = freq class Property(object): def __init__(self, name): self.name = name mapper(Property, properties) Now I'd like to be able to do the following: Session = sessionmaker(bind=engine) s = Session() word = Word('foo', 42) word['bar'] = 'yes' # or word.bar = 'yes' ? s.add(word) s.commit() Ideally this should add 1|foo|42 to the words table, add 1|bar to the properties table, and add 1|1|yes to the property_values table. However, I don't have the right mappings and relations in place to make this happen. I get the sense from reading the documentation at http://www.sqlalchemy.org/docs/05/mappers.html#association-pattern that I want to use an association proxy or something of that sort here, but the syntax is unclear to me. I experimented with this: mapper(Word, words, properties={ 'properties': relation(Property, secondary=property_values) }) but this mapper only fills in the foreign key values, and I need to fill in the other value as well. Any assistance would be greatly appreciated.
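
    The piece that usually completes this picture is SQLAlchemy's association proxy over a dictionary-keyed relation. The following is only a sketch (declarative style, 1.x-era imports; exact spellings vary between SQLAlchemy versions, and a real version would reuse existing Property rows instead of always creating one):

        from sqlalchemy import Column, Integer, String, ForeignKey
        from sqlalchemy.orm import relationship
        from sqlalchemy.orm.collections import attribute_mapped_collection
        from sqlalchemy.ext.associationproxy import association_proxy
        from sqlalchemy.ext.declarative import declarative_base

        Base = declarative_base()

        class Property(Base):
            __tablename__ = 'properties'
            id = Column(Integer, primary_key=True)
            name = Column(String(20), nullable=False, unique=True)

        class PropertyValue(Base):
            __tablename__ = 'property_values'
            # a mapped association object needs a primary key; the two FKs act as one here
            word_id = Column(Integer, ForeignKey('words.id'), primary_key=True)
            property_id = Column(Integer, ForeignKey('properties.id'), primary_key=True)
            value = Column(String(20))
            prop = relationship(Property)

            def __init__(self, key, value):
                self.prop = Property(name=key)   # simplification: always makes a new Property
                self.value = value

            @property
            def key(self):                        # keys the dict collection below
                return self.prop.name

        class Word(Base):
            __tablename__ = 'words'
            id = Column(Integer, primary_key=True)
            name = Column(String(20))
            freq = Column(Integer)
            _props = relationship(PropertyValue,
                                  collection_class=attribute_mapped_collection('key'))
            properties = association_proxy('_props', 'value',
                                           creator=lambda k, v: PropertyValue(k, v))

        word = Word(name='foo', freq=42)
        word.properties['bar'] = 'yes'            # creates the PropertyValue/Property pair

    With the proxy in place the three-table mapping is handled for you; the exact word['bar'] = 'yes' spelling would additionally need a __setitem__ on Word that delegates to word.properties.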

    Read the article

  • Drawbacks with using Class Methods in Objective C.

    - by RickiG
    Hi, I was wondering if there are any memory/performance drawbacks, or just drawbacks in general, with using Class Methods like: + (void)myClassMethod:(NSString *)param { // much to be done... } or + (NSArray*)myClassMethod:(NSString *)param { // much to be done... return [NSArray autorelease]; } It is convenient placing a lot of functionality in Class Methods, especially in an environment where I have to deal with memory management (iPhone), but isn't there usually a catch when something is convenient? An example could be a made-up Web Service that consisted of a lot of classes with very simple functionality, i.e. TomorrowsXMLResults; TodaysXMLResults; YesterdaysXMLResults; MondaysXMLResults; TuesdaysXMLResults; . . . n I collect a ton of these in my Web Service class and just instantiate the web service class and let methods on this class call Class Methods on the 'Results' classes. The classes are simple but they handle large amounts of XML, instantiate lots of objects, etc. I guess I am asking whether Class Methods live or are treated differently on the stack and in memory than messages to instantiated objects? Or are they just instantiated and pulled down again behind the scenes and thus just a way of saving a few lines of code?

    Read the article

  • Events and references pattern

    - by serhio
    In a project I have the following relation between BO and GUI objects. For example, G could represent a graphic with time lines, C a TimeLine curve, P the points of that curve, and T the time that represents each point. Each GUI object is associated with the corresponding BO object. When T changes, GUI P captures the Changed event and changes its location. So, when G should be modified, it modifies its objects internally and, as a result, T changes, P moves and the GuiG visually changes; everything is OK. But there is an inconvenience in this architecture... BO should not be recreated, because this will break the link between BO and GUIO. In particular, GUI P should always have the same reference to T. If in the business logic I do, e.g., P1.T = new T(this.T + 10), GUI_P1 will not move anymore, because it waits for an event from the former P1.T reference, which does not belong to P1 anymore. So the solution was to always modify the existing objects, not to recreate them. But here is another inconvenience: performance. Say I have a ready newC object that should replace the older one. Instead of doing G1.C = newC, I should, for each T in each P in C, replace it with the corresponding T from P in newC. Is there another, more optimal way to do it?
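
    One way to frame the trade-off (a generic Python sketch, not the project's actual classes): keep the GUI's subscriptions valid by copying the new object's state into the existing instance and firing one change event per object, so "replace C" becomes a single pass over the matching pairs rather than a rebuild of the object graph.

        class Time:
            def __init__(self, value):
                self._value = value
                self._listeners = []              # the GUI point subscribes here

            def on_change(self, callback):
                self._listeners.append(callback)

            @property
            def value(self):
                return self._value

            @value.setter
            def value(self, new_value):
                self._value = new_value
                for cb in self._listeners:
                    cb(self)                      # same object, new state: subscriptions survive

            def assume_state_of(self, other):
                """Copy another Time's state into this instance instead of
                replacing the reference held by the GUI."""
                self.value = other.value

        t = Time(10)
        t.on_change(lambda obj: print("GUI point moves to", obj.value))
        ready_made = Time(25)                     # e.g. the T inside a ready newC
        t.assume_state_of(ready_made)             # prints: GUI point moves to 25

    The other common option is to raise a single "replaced" event from the parent (G or C) so the GUI re-binds to the new children once, which avoids the per-leaf copying when whole branches change.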

    Read the article

  • Convert date to string upon saving a doctrine record

    - by takteek
    Hi, I'm trying to migrate one of my PHP projects to Doctrine. I've never used it before, so there are a few things I don't understand. In my current code, I have a class similar to this: class ScheduleItem { private Date start; //A PEAR Date object. private Date end; public function getStart() { return $this->start; } public function setStart($val) { $this->start = $val; } public function getEnd() { return $this->end; } public function setEnd($val) { $this->end = $val; } } I have a ScheduleItemDAO class with methods like save(), getByID(), etc. When loading from and saving to the database, the DAO class converts the Date objects to and from strings so they can be stored in a timestamp field. In my attempt to move to Doctrine, I created a new class like this: class ScheduleItem extends Doctrine_Record { public function setTableDefinition() { $this->hasColumn('start', 'timestamp'); $this->hasColumn('end', 'timestamp'); } } I had hoped I would be able to use Date objects for the start and end times, and have them converted to strings when they are saved to the database. How can I accomplish this?

    Read the article

  • Handling form from different view and passing form validation through session in django

    - by Mo J. Mughrabi
    I have a requirement to build a comment-like app in my Django project. The app has a view to receive a submitted form, process it, and return the errors to wherever the form came from. I finally managed to get it to work, but I have doubts, since the way I am using it passes the entire submitted form data through the session. Below is the code. comment/templatetags/comment.py @register.inclusion_tag('comment/form.html', takes_context=True) def comment_form(context, model, object_id, next): """ comment_form() is responsible for rendering the comment form """ # clear sessions from variable in case it was found content_type = ContentType.objects.get_for_model(model) try: request = context['request'] if request.session.get('comment_form', False): form = CommentForm(request.session['comment_form']) form.fields['content_type'].initial = 15 form.fields['object_id'].initial = 2 form.fields['next'].initial = next else: form = CommentForm(initial={ 'content_type' : content_type.id, 'object_id' : object_id, 'next' : next }) except Exception as e: logging.error(str(e)) form = None return { 'form' : form } comment/view.py def save_comment(request): """ save_comment: """ if request.method == 'POST': # clear sessions from variable in case it was found if request.session.get('comment_form', False): del request.session['comment_form'] form = CommentForm(request.POST) if form.is_valid(): obj = form.save(commit=False) if request.user.is_authenticated(): obj.created_by = request.user obj.save() messages.info(request, _('Your comment has been posted.')) return redirect(form.data.get('next')) else: request.session['comment_form'] = request.POST return redirect(form.data.get('next')) else: raise Http404 The usage is by loading the template tag and firing {% comment_form article article.id article.get_absolute_url %}. My doubt is whether passing the submitted form data through the session is the correct approach. Would that be a problem? A security risk? Performance issues? Please advise. Update: In response to Pol's question: the reason I went with this approach is that the comment form is handled in a separate app. In my scenario, I render objects such as article, and all I do is invoke the templatetag to render the form. What would be an alternative approach for my case? You also shared the django comment app with me, which I am aware of, but the client I am working with requires a lot of complex work to be done in the comment app; that's why I am working on a new one.
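
    For what it's worth, one small tightening of the posted approach (a sketch of the same idea, not a drop-in replacement) is to pop the saved POST data out of the session in the template tag and re-bind the form, so the data is consumed exactly once and the field errors render next to the inputs instead of relying on hard-coded initial values:

        @register.inclusion_tag('comment/form.html', takes_context=True)
        def comment_form(context, model, object_id, next):
            content_type = ContentType.objects.get_for_model(model)
            request = context['request']
            data = request.session.pop('comment_form', None)   # read and clear in one step
            if data:
                form = CommentForm(data)     # bound form: re-validating fills form.errors
                form.is_valid()
            else:
                form = CommentForm(initial={'content_type': content_type.id,
                                            'object_id': object_id,
                                            'next': next})
            return {'form': form}

    Since only request.POST (plain strings) goes into the session this is not a big performance or security concern by itself, though the more common Django pattern avoids the session entirely by re-rendering the referring view with the bound form in its context.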

    Read the article

  • java.io.StreamCorruptedException: invalid stream header: 7371007E

    - by Alex
    Hello, this is probably a simple question. I have a client-server application which communicates using objects. When I send only one object from the client to the server, all works well. When I attempt to send several objects one after another on the same stream, I get a StreamCorruptedException. Can someone direct me to the cause of this error? Thanks. Client write method: private SecMessage[] send(SecMessage[] msgs) { SecMessage result[]=new SecMessage[msgs.length]; Socket s=null; ObjectOutputStream objOut =null; ObjectInputStream objIn=null; try { s=new Socket("localhost",12345); objOut=new ObjectOutputStream( s.getOutputStream()); for (SecMessage msg : msgs) { objOut.writeObject(msg); } objOut.flush(); objIn=new ObjectInputStream(s.getInputStream()); for (int i=0;i<result.length;i++) result[i]=(SecMessage)objIn.readObject(); } catch(java.io.IOException e) { alert(IO_ERROR_MSG+"\n"+e.getMessage()); } catch (ClassNotFoundException e) { alert(INTERNAL_ERROR+"\n"+e.getMessage()); } finally { try {objIn.close();} catch (IOException e) {} try {objOut.close();} catch (IOException e) {} } return result; } Server read method: //in is an InputStream defined in the server SecMessage rcvdMsgObj; rcvdMsgObj=(SecMessage)new ObjectInputStream(in).readObject(); return rcvdMsgObj; And the SecMessage class is: public class SecMessage implements java.io.Serializable { private static final long serialVersionUID = 3940341617988134707L; private String cmd; //... nothing interesting here, just a bunch of fields, getters and setters }

    Read the article

  • How is a referencing environment generally implemented for closures?

    - by Alexandr Kurilin
    Let's say I have a statically/lexically scoped language with deep binding and I create a closure. The closure will consist of the statements I want executed plus the so called referencing environment, or, to quote this post, the collection of variables which can be used. What does this referencing environment actually look like implementation-wise? I was recently reading about ObjectiveC's implementation of blocks, and the author suggests that behind the scenes you get a copy of all of the variables on the stack and also of all the references to heap objects. The explanation claims that you get a "snapshot" of the referencing environment at the point in time of the closure's creation. Is that more or less what happens, or did I misread that? Is anything done to "freeze" a separate copy of the heap objects, or is it safe to assume that if they get modified between closure creation and the closure executing, the closure will no longer be operating on the original version of the object? If indeed there's copying being made, are there memory usage considerations in situations where one might want to create plenty of closures and store them somewhere? I think that misunderstanding of some of these concepts might lead to tricky issues like the ones Eric Lippert mentions in this blog post. It's interesting because you'd think that it wouldn't make sense to keep a reference to a value type that might be gone by the time the closure is called, but I'm guessing that in C# the compiler will figure out that the variable is needed later and put it into the heap instead. It seems that in most memory-managed languages everything is a reference and thus ObjectiveC is a somewhat unique situation with having to deal with copying what's on the stack.
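
    For comparison, Python closures capture variables rather than values: the closed-over names live in cells shared between the defining scope and the closure, so mutation after creation is visible at call time. A quick sketch of what that referencing environment looks like:

        def make_counter():
            count = 0                        # lives in a closure cell, not a frozen stack copy
            def bump():
                nonlocal count
                count += 1
                return count
            return bump

        c = make_counter()
        print(c(), c())                      # 1 2  -- the cell is shared and mutable

        # Late binding: each lambda sees i's value at call time, not at creation time.
        funcs = [lambda: i for i in range(3)]
        print([f() for f in funcs])          # [2, 2, 2], not [0, 1, 2]

        print(c.__closure__[0].cell_contents)   # 2 -- the captured environment is inspectable

    This is the by-reference end of the design space; the Objective-C block behaviour described above (copying stack values at creation unless a variable is marked otherwise) is the by-value end, and C#'s hoisting of captured locals onto the heap is how a by-reference design copes with locals that outlive their frame.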

    Read the article

  • UnauthorizedAccessException in ComRegisterFunction when accessing registry on Win 7 64.

    - by sanbornc
    I have a [ComRegisterFunction] that I am using to register a BHO Internet explorer extension. During registration on 64-bit windows 7 machines, a UnauthorizedAccessException is thrown on the call to subKey.SetValue("NoExplorer", 1). The registry appears to have BHO's located @ \HKLM\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\explorer\Browser Helper Objects, however, I get them same exception when trying to register there. Any Help would be appreciated. [ComRegisterFunction] public static void RegisterBho(Type type) { string BhoKeyName= "Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\Browser Helper Objects"; RegistryKey registryKey = Registry.LocalMachine.OpenSubKey(BhoKeyName, true) ?? Registry.LocalMachine.CreateSubKey(BhoKeyName); if(registryKey == null) throw new ApplicationException("Unable to register Bho"); registryKey.Flush(); string guid = type.GUID.ToString("B"); RegistryKey subKey = registryKey.OpenSubKey(guid) ?? registryKey.CreateSubKey(guid); if (subKey == null) throw new ApplicationException("Unable to register Bho"); subKey.SetValue("NoExplorer", 1); registryKey.Close(); subKey.Close(); }

    Read the article

  • How to read data from file(.dat) in append mode

    - by govardhan
    We have an application which requires us to read data from a file (.dat) dynamically using deserialization. We actually get the first object, but then it throws a null pointer exception and "java.io.StreamCorruptedException: invalid type code: AC" when we access the other objects using a "for" loop. File file=null; FileOutputStream fos=null; BufferedOutputStream bos=null; ObjectOutputStream oos=null; try{ file=new File("account4.dat"); fos=new FileOutputStream(file,true); bos=new BufferedOutputStream(fos); oos=new ObjectOutputStream(bos); oos.writeObject(m); System.out.println("object serialized"); amlist=new MemberAccountList(); oos.close(); } catch(Exception ex){ ex.printStackTrace(); } Reading objects: try{ MemberAccount m1; file=new File("account4.dat");//add your code here fis=new FileInputStream(file); bis=new BufferedInputStream(fis); ois=new ObjectInputStream(bis); System.out.println(ois.readObject()); while(ois.readObject()!=null){ m1=(MemberAccount)ois.readObject(); System.out.println(m1.toString()); } /*mList.addElement(m1); // Here we have the issue throwing null pointer exception Enumeration elist=mList.elements(); while(elist.hasMoreElements()){ obj=elist.nextElement(); System.out.println(obj.toString()); }*/ } catch(ClassNotFoundException e){ } catch(EOFException e){ System.out.println("end"); } catch(Exception ex){ ex.printStackTrace(); }

    Read the article

  • How to generate lots of redundant ajax elements like checkboxes and pulldowns in Django?

    - by iJames
    Hello folks. I've been getting lots of answers from stackoverflow now that I'm in Django, just by searching. Now I hope my question will also create some value for everybody. In choosing Django, I was hoping there was some mechanism similar to the way you can do partials in RoR. This was going to help me in two ways. One was in generating repeating indexed forms or form elements, and the other in rendering only a piece of the page on the round trip. I've done a little bit of that by using taconite with a simple URL click, but now I'm trying to get more advanced. This will focus on the form issue, which boils down to how to iterate over a secondary object. If I have a list of photo instances, each of which has a couple of parameters, let's say a size and a quantity, I want to generate form elements for each photo instance separately. But then I have two lists I want to iterate on at the same time. Context: photos : Photo.objects.all() and forms = {} for photo in photos: forms[photo.id] = PhotoForm() In other words, we've got a list of photo objects and a dict of forms keyed by photo.id. Here's an abstraction of the template: {% for photo in photos %} {% include "photoview.html" %} {% comment %} So here I want to use the photo.id as an index to get the correct form, so that each photo has its own form. I would want to have a different action and each form field would be unique. Is that possible? How can I iterate on that? Thanks! {% endcomment %} Quantity: {{ oi.quantity }} {{ form.quantity }} Dimensions: {{ oi.size }} {{ form.size }} {% endfor %} What can I do about this simple case? And how can I make it so that every control automatically updates the server instead of using a form at all? Thanks! James
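
    Django's formsets (formset_factory) are the built-in answer to repeated indexed forms, but even without them the key ingredient is a per-photo prefix so field names and ids don't collide. A hedged sketch (PhotoForm's fields and the Photo attributes are assumed from the description above):

        from django import forms

        class PhotoForm(forms.Form):
            quantity = forms.IntegerField(min_value=0)
            size = forms.CharField(max_length=20)

        def build_photo_forms(photos, data=None):
            # data is request.POST on submission, None when first rendering the page
            return [(photo, PhotoForm(data,
                                      prefix=f"photo-{photo.id}",
                                      initial={"quantity": photo.quantity, "size": photo.size}))
                    for photo in photos]

        # view:      context = {"photo_forms": build_photo_forms(Photo.objects.all())}
        # template:  {% for photo, form in photo_forms %} ... {{ form.quantity }} {{ form.size }} ... {% endfor %}

    Iterating one list of (photo, form) pairs removes the need to index into a dict from the template, and the unique prefixes are also what per-field Ajax handlers would hook onto.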

    Read the article

  • Update SQL Server 2000 to SQL Server 2008: Benefits please?

    - by Ciaran Archer
    Hi there, I'm looking for the benefits of upgrading from SQL Server 2000 to 2008. I was wondering: What database features can we leverage with 2008 that we can't now? What new TSQL features can we look forward to using? What performance benefits can we expect to see? What else will make management go for it? And the converse: What problems can we expect to encounter? What other problems have people found when migrating? Why fix something that isn't (technically) broken? We work in a Java shop, so any .NET / CLR stuff won't rock our world. We also use Eclipse as our main development environment, so any integration with Visual Studio won't be a plus. We do use SQL Server Management Studio, however. Some background: Our main database machine is a 32-bit Dell Intel Xeon MP CPU 2.0GHz, 40MB of RAM with Physical Address Extension, running Windows Server 2003 Enterprise Edition. We will not be changing our hardware. Our databases in total are under a TB, with some having more than 200 tables. But they are busy, and during busy times we see 60-80% CPU utilisation. Apart from the fact that SQL Server 2000 is coming close to end of life, why should we upgrade? Any and all contributions are appreciated!

    Read the article

  • WCF code generation for large/complex schema (HR-XML/OAGIS) - is there an alternative?

    - by Sasha Borodin
    Hello, and thank you for reading. I am implementing a WCF Service based on a predefined specification (HR-XML 3.0). As such, I am starting with the schema, and working my way back to code. There are a number of large Schema documents (which import yet more Schema documents) related to my implementation, provided by this specification. I am able to generate code using xsd.exe, by supplying the "main" and "supporting" xsd files as arguments. But there are several issues, and I am wondering if this is the right approach. There are literally hundreds of classes - the code file is half a meg in size. There are duplicate classes (ex. Type, Type1 - which both represent the same type). There are classes declared as inheriting from a base class, but that base class is not generated/defined. I understand that there are limitations to the types of Schema supported by svcutil.exe/xsd.exe when targeting the DataContractSerializer and even XmlSerializer. My question is two-fold: Are code generation "issues" fairly common when dealing with larger, modular xsd files? Has anyone had success with generating data contracts from OAGIS or HR-XML schema? Given the above issues, are there better approaches to this task, avoiding generating code and working with concrete objects? Does it make better sense to read and compose a SOAP message directly, while still taking advantage of the rest of the WCF framework? I understand that I am losing the convenience of working with .NET objects, and the framework-provided (de)serialization; given these losses, would it still be advantageous to base my Service on WCF? Is there some "middle ground" between working with .NET types and pure XML? Thank you very much! -Sasha Borodin DFWHC.org

    Read the article
