Search Results

Search found 3786 results on 152 pages for 'instances'.


  • JSF inner datatable not respecting rendered condition of outer table.

    - by Marc
        <h:dataTable cellpadding="0" cellspacing="0" styleClass="list_table" id="OuterItems" value="#{valueList.values}" var="item" border="0">
          <h:column rendered="#{item.typeA}">
            <h:dataTable cellpadding="0" cellspacing="0" styleClass="list_table" id="InnerItems" value="#{item.options}" var="option" border="0">
              <h:column>
                <h:outputText value="Option: #{option.displayValue}"/>
              </h:column>
            </h:dataTable>
          </h:column>
          <h:column rendered="#{item.typeB}">
            <h:dataTable cellpadding="0" cellspacing="0" styleClass="list_table" id="InnerItems" value="#{item.demands}" var="demand" border="0">
              <h:column>
                <h:outputText value="Demand: #{demand.displayValue}"/>
              </h:column>
            </h:dataTable>
          </h:column>
        </h:dataTable>

        public class Item {
            ...
            public boolean isTypeA() { return this instanceof TypeA; }
            public boolean isTypeB() { return this instanceof TypeB; }
            ...
        }

        public class TypeA extends Item {
            ...
            public List getOptions() { ... }
            ...
        }

        public class TypeB extends Item {
            ...
            public List getDemands() { ... }
            ...
        }

    I'm having an issue with JSF. I've abstracted the problem out here, and I'm hoping someone can help me understand how what I'm doing fails. I'm looping over a list of Items. These Items are actually instances of the subclasses TypeA and TypeB. For TypeA, I want to display the options; for TypeB, I want to display the demands. When rendering the page for the first time, this works fine. However, when I post back to the page for some action, I get an error:

        [3/26/10 12:52:32:781 EST] 0000008c SystemErr R javax.faces.FacesException: Error getting property 'options' from bean of type TypeB
            at com.sun.faces.lifecycle.ApplyRequestValuesPhase.execute(ApplyRequestValuesPhase.java:89)
            at com.sun.faces.lifecycle.LifecycleImpl.phase(LifecycleImpl.java(Compiled Code))
            at com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:91)
            at com.ibm.faces.portlet.FacesPortlet.processAction(FacesPortlet.java:193)

    My grasp on the JSF lifecycle is very rough. At this point, I understand there is an error in the ApplyRequestValues phase, which is very early, so the previous state is restored and nothing changes. What I don't understand is that in order to fulfill the rendering condition "item.typeA", the object has to be an instance of TypeA. But here, it looks like an object passed the condition even though it was an instance of TypeB. It is as if the inner dataTable (InnerItems) is evaluated before the outer one (OuterItems). My working assumption is that I just don't understand how/when the rendered attribute is actually evaluated.
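
    One workaround that is often suggested for this family of JSF problems (a sketch of mine, not from the original post): give the base class safe default accessors, so that evaluating #{item.options} or #{item.demands} during the ApplyRequestValues phase cannot fail no matter which subclass the row actually holds.

        import java.util.Collections;
        import java.util.List;

        public abstract class Item {
            // Safe defaults: TypeA overrides getOptions(), TypeB overrides
            // getDemands(); every other lookup resolves to an empty list.
            public List<?> getOptions() { return Collections.emptyList(); }
            public List<?> getDemands() { return Collections.emptyList(); }
        }

    With defaults like these in place, it no longer matters in which phase, or in which order, JSF evaluates the inner tables' value bindings.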


  • How to set text above and below a JButton icon?

    - by mre
    I want to set text above and below a JButton's icon. At the moment, in order to achieve this, I override the layout manager and use three JLabel instances (i.e. 2 for text and 1 for the icon). But this seems like a dirty solution. Is there a more direct way of doing this? Note: I'm not looking for a multi-line solution, I'm looking for a multi-label solution. Although this article refers to it as a multi-line solution, it actually seems to refer to a multi-label solution.

    EXAMPLE

        import java.awt.Component;
        import java.awt.FlowLayout;
        import javax.swing.BoxLayout;
        import javax.swing.Icon;
        import javax.swing.JButton;
        import javax.swing.JFrame;
        import javax.swing.JLabel;
        import javax.swing.SwingUtilities;
        import javax.swing.UIManager;

        public final class JButtonDemo {

            public static void main(String[] args) {
                SwingUtilities.invokeLater(new Runnable() {
                    @Override
                    public void run() {
                        createAndShowGUI();
                    }
                });
            }

            private static void createAndShowGUI() {
                final JFrame frame = new JFrame();
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setLayout(new FlowLayout());
                frame.add(new JMultiLabelButton());
                frame.pack();
                frame.setLocationRelativeTo(null);
                frame.setVisible(true);
            }

            private static final class JMultiLabelButton extends JButton {
                private static final long serialVersionUID = 7650993517602360268L;

                public JMultiLabelButton() {
                    super();
                    setLayout(new BoxLayout(this, BoxLayout.Y_AXIS));
                    add(new JCenterLabel("Top Label"));
                    add(new JCenterLabel(UIManager.getIcon("OptionPane.informationIcon")));
                    add(new JCenterLabel("Bottom Label"));
                }
            }

            private static final class JCenterLabel extends JLabel {
                private static final long serialVersionUID = 5502066664726732298L;

                public JCenterLabel(final String s) {
                    super(s);
                    setAlignmentX(Component.CENTER_ALIGNMENT);
                }

                public JCenterLabel(final Icon i) {
                    super(i);
                    setAlignmentX(Component.CENTER_ALIGNMENT);
                }
            }
        }


  • question about book example - Java Concurrency in Practice, Listing 4.12

    - by mike
    Hi, I am working through an example in Java Concurrency in Practice and am not understanding why a concurrent-safe container is necessary in the following code. I'm not seeing how the state of the container "locations" could be modified after construction; so since it is published through an 'unmodifiableMap' wrapper, it appears to me that an ordinary HashMap would suffice. E.g., it is accessed concurrently, but the state of the map is only accessed by readers, no writers. The value fields in the map are synchronized via delegation to the 'SafePoint' class, so while the points are mutable, the keys of the hash, and their associated values (references to SafePoint instances) in the map, never change. I think my confusion is based on what precisely the state of the collection is in this problem. Thanks!! -Mike

    Listing 4.12, Java Concurrency in Practice (this listing is available as .java, and also in chapter form via Google):

        /////////////begin code
        @ThreadSafe
        public class PublishingVehicleTracker {
            private final Map<String, SafePoint> locations;
            private final Map<String, SafePoint> unmodifiableMap;

            public PublishingVehicleTracker(Map<String, SafePoint> locations) {
                this.locations = new ConcurrentHashMap<String, SafePoint>(locations);
                this.unmodifiableMap = Collections.unmodifiableMap(this.locations);
            }

            public Map<String, SafePoint> getLocations() {
                return unmodifiableMap;
            }

            public SafePoint getLocation(String id) {
                return locations.get(id);
            }

            public void setLocation(String id, int x, int y) {
                if (!locations.containsKey(id))
                    throw new IllegalArgumentException("invalid vehicle name: " + id);
                locations.get(id).set(x, y);
            }
        }

        // monitor-protected helper class
        @ThreadSafe
        public class SafePoint {
            @GuardedBy("this") private int x, y;

            private SafePoint(int[] a) { this(a[0], a[1]); }

            public SafePoint(SafePoint p) { this(p.get()); }

            public SafePoint(int x, int y) {
                this.x = x;
                this.y = y;
            }

            public synchronized int[] get() {
                return new int[] { x, y };
            }

            public synchronized void set(int x, int y) {
                this.x = x;
                this.y = y;
            }
        }
        ///////////end code
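
    For comparison, here is a sketch (mine, not from the book) of the variant the question describes: a plain HashMap behind an unmodifiable wrapper. If the map really is never written after construction, this is thread-safe too, provided the object is safely published, for example via a final field, so the question comes down to whether the tracker's map state is truly frozen.

        import java.util.Collections;
        import java.util.HashMap;
        import java.util.Map;

        public class ReadOnlyTracker {
            // final field => safe publication of the map's initial state
            private final Map<String, SafePoint> locations;

            public ReadOnlyTracker(Map<String, SafePoint> locations) {
                // one-time defensive copy; no writes ever happen afterwards
                this.locations = Collections.unmodifiableMap(
                        new HashMap<String, SafePoint>(locations));
            }

            public Map<String, SafePoint> getLocations() {
                return locations;
            }
        }

    (SafePoint here is the monitor-protected class from the listing above.)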


  • WLI domain with 3 servers - issues on JPD process startup

    - by XpiritO
    Hi there. I'm currently working on a clustered WLI environment which comprises 3 servers: 1 admin server ("AdminServer") and 2 managed servers ("mn1" and "mn2") grouped as a cluster, as follows:

    Architecture diagram: http://img72.imageshack.us/img72/4112/clusterdiagram.jpg

    I've developed a JPD process to execute some scheduled tasks, invoked using a Message Broker. I've deployed this project into a single-server WLI domain (with AdminServer only) and it works as expected: the JPD process is invoked (I've configured a Timer Event Generator instance to start it up).

    Message broker: http://img532.imageshack.us/img532/1443/wlimessagebroker.jpg
    Timer event generator: http://img408.imageshack.us/img408/7358/wlitimereventgenerator.jpg

    In order to achieve fail-over and load-balancing capabilities, I'm currently trying to deploy this JPD process into this clustered WLI environment. However, I'm having some issues with this, as I cannot get it to work properly, even though it appears to be running. Here is a screenshot of the "WLI Process Instance Monitor" (with AdminServer and mn1 instances up and running): http://img710.imageshack.us/img710/8477/wliprocessinstancemonit.jpg

    According to this screen the process seems to be running. However, I don't see any output coming out at either the AdminServer console or the mn1 console. In the single-server domain, output was visible from the JPD process "timeout" callback method, whose implementation is shown below:

        @com.bea.wli.control.broker.MessageBroker.StaticSubscription(xquery = "", filterValueMatch = "", channelName = "/SamplePrefix/Samples/SampleStringChannel", messageBody = "{x0}")
        public void subscription(java.lang.String x0) {
            String toReturn = "";
            try {
                Context myCtx = new InitialContext();
                MBeanHome mbeanHome = (MBeanHome) myCtx.lookup("weblogic.management.home.localhome");
                toReturn = mbeanHome.getMBeanServer().getServerName();
                System.out.println("**** executed at **** " + System.currentTimeMillis() + " by: " + toReturn);
            } catch (Exception e) {
                System.out.println("Exception!");
                e.printStackTrace();
            }
        }

        (...)

        @org.apache.beehive.controls.api.events.EventHandler(field = "myT", eventSet = com.bea.control.WliTimerControl.Callback.class, eventName = "onTimeout")
        public void myT_onTimeout(long time, java.io.Serializable data) {
            // #START: CODE GENERATED - PROTECTED SECTION - you can safely add code above this comment in this method. #//
            // input transform
            System.out.println("**** published at **** " + System.currentTimeMillis());
            publishControl.publish("aaaa");
            // parameter assignment
            // #END : CODE GENERATED - PROTECTED SECTION - you can safely add code below this comment in this method. #//
        }

    And here is the output visible at the "AdminServer" console in single-server domain testing:

        **** published at **** 1273238090713
        **** executed at **** 1273238132123 by: AdminServer
        **** published at **** 1273238152462
        **** executed at **** 1273238152562 by: AdminServer
        (...)

    What may be wrong with my clustered configuration? Am I missing something to accomplish clustered deployment? Thanks in advance for your help.


  • Objective-c design advice for use of different data sources, swapping between test and live

    - by user200341
    I'm in the process of designing an application that is part of a larger piece of work, depending on other people to build an API that the app can make use of to retrieve data. While I was thinking about how to set up this project and design the architecture around it, something occurred to me, and I'm sure many people have been in similar situations. Since my work depends on other people completing their tasks, and on a test server, this slows work down at my end.

    So the question is: what's the best practice for creating test repositories and classes, implementing them, and not having to depend on altering several places in the code to swap between the test classes and the actual repositories / proper API calls?

    Contemplate the following scenario:

        GetDataFromApiCommand *getDataCommand = [[GetDataFromApiCommand alloc] init];
        getDataCommand.delegate = self;
        [getDataCommand getData];

    Once the data is available via the API, "GetDataFromApiCommand" could use the actual API, but until then a set of mock data could be returned upon the call of [getDataCommand getData]. There might be multiple instances of this, in various places in the code, so replacing all of them wherever they are is a slow and painful process which inevitably leads to one or two being overlooked. In strongly typed languages we could use dependency injection and just alter one place. In Objective-C a factory pattern could be implemented, but is that the best route to go for this?

        GetDataFromApiCommand *getDataCommand = [GetDataFromApiCommandFactory buildGetDataFromApiCommand];
        getDataCommand.delegate = self;
        [getDataCommand getData];

    What are the best practices to achieve this result? Since this would be most useful, even if you have the actual API available, to run tests or work off-line, the ApiCommands would not necessarily have to be replaced permanently; there would be the option to select "Do I want to use TestApiCommand or ApiCommand". It is more interesting to have the option to switch between "all commands are test" and "all commands use the live API", rather than selecting them one by one; however, that would also be useful for testing one or two actual API commands, mixing them with test data.

    EDIT

    The way I have chosen to go with this is to use the factory pattern. I set up the factory as follows:

        @implementation ApiCommandFactory

        + (ApiCommand *)newApiCommand {
            // return [[ApiCommand alloc] init];
            return [[ApiCommandMock alloc] init];
        }

        @end

    And anywhere I want to use the ApiCommand class:

        GetDataFromApiCommand *getDataCommand = [ApiCommandFactory newApiCommand];

    When the actual API call is required, the comments can be removed and the mock can be commented out. Using new in the message name implies that whoever uses the factory to get an object is responsible for releasing it (since we want to avoid autorelease on the iPhone). If additional parameters are required, the factory needs to take these into consideration, i.e.:

        [ApiCommandFactory newSecondApiCommand:@"param1"];

    This will work quite well with repositories as well.


  • HttpWebResponse gets mixed up when used inside multiple threads

    - by Holli
    In my application I have a few threads that will get data from a web service. Basically I just open a URL and get an XML output. I have a few threads that do this continuously, but with different URLs. Sometimes the results are mixed up: the XML output doesn't belong to the URL of a thread but to the URL of another thread.

    In each thread I create an instance of the class GetWebPage and call the method Get from this instance. The method is very simple and based mostly on code from the MSDN documentation. (See below. I removed my error handling here!)

        public string Get(string userAgent, string url, string user, string pass, int timeout, int readwriteTimeout, WebHeaderCollection whc)
        {
            string buffer = string.Empty;
            HttpWebRequest myWebRequest = (HttpWebRequest)WebRequest.Create(url);
            if (!string.IsNullOrEmpty(userAgent))
                myWebRequest.UserAgent = userAgent;
            myWebRequest.Timeout = timeout;
            myWebRequest.ReadWriteTimeout = readwriteTimeout;
            myWebRequest.Credentials = new NetworkCredential(user, pass);
            string[] headers = whc.AllKeys;
            foreach (string s in headers)
            {
                myWebRequest.Headers.Add(s, whc.Get(s));
            }
            using (HttpWebResponse myWebResponse = (HttpWebResponse)myWebRequest.GetResponse())
            {
                using (Stream ReceiveStream = myWebResponse.GetResponseStream())
                {
                    Encoding encode = Encoding.GetEncoding("utf-8");
                    StreamReader readStream = new StreamReader(ReceiveStream, encode);
                    // Read 1024 characters at a time.
                    Char[] read = new Char[1024];
                    int count = readStream.Read(read, 0, 1024);
                    int break_counter = 0;
                    while (count > 0 && break_counter < 10000)
                    {
                        String str = new String(read, 0, count);
                        buffer += str;
                        count = readStream.Read(read, 0, 1024);
                        break_counter++;
                    }
                }
            }
            return buffer;
        }

    As you can see, I have no public properties or any other shared resources. At least I don't see any. The url is the service I call on the internet and buffer is the XML output from the server. Like I said, I have multiple instances of this class/method in a few threads (10 to 12), and sometimes buffer does not belong to the url of the same thread but to that of another thread.


  • CoreData : App crashes when deleting last instance created

    - by Leo
    Hello, I have a 2-tab application. In the first tab, I'm creating objects of the "Sample" and "SampleList" entities. Each SampleList contains an ID and a set of samples. Each Sample contains a date and a temperature property. In the second tab, I'm displaying my data in a tableView. I implemented the - (void)tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath method in order to delete SampleLists. In my xcdatamodel, the delete rule for my relationship between SampleList and Sample is Cascade.

    My problem is that when I try to delete a SampleList I just created, the app crashes and I receive an EXC_BAD_ACCESS signal. If I restart it, then I'm able to delete "old" SampleLists without any problems. Earlier, I had the following problem: I couldn't display the SampleLists I had created since launching the app, because it crashed then too; I also received an EXC_BAD_ACCESS signal. Actually, it seemed that the date of the last sample created in the set was nil. If I do not release the NSDate I'm using to set the sample's date, I don't have this problem anymore... If anyone could help me find out what could cause my troubles, it would be great!!

    Here is the method I'm using to create new instances:

        SampleList *newSampleList = (SampleList *)[NSEntityDescription insertNewObjectForEntityForName:@"SampleList" inManagedObjectContext:managedObjectContext];
        [newSampleList setPatchID:patchID];
        NSMutableSet *newSampleSet = [[NSMutableSet alloc] init];
        NSCalendar *gregorian = [[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar];
        for (int i = 0; i < [byteArray count]; i = i + 4, sampleCount++) {
            NSDateComponents *comps = [[NSDateComponents alloc] init];
            [comps setYear:year];
            [comps setMonth:month];
            [comps setDay:day];
            [comps setHour:hours];
            [comps setMinute:minutes];
            NSDate *sampleDate = [gregorian dateFromComponents:comps];
            Sample *newSample = (Sample *)[NSEntityDescription insertNewObjectForEntityForName:@"Sample" inManagedObjectContext:managedObjectContext];
            [newSample setSampleDate:sampleDate];
            [newSample setSampleTemperature:[NSNumber numberWithInt:temperature]];
            [newSampleSet addObject:newSample];
            [comps release];
            //[sampleDate release];
        }
        [newSampleList setSampleSet:newSampleSet];
        // [newSampleSet release];
        NSError *error;
        if (![managedObjectContext save:&error]) {
            NSLog(@"Could not Save the context !!");
        }
        [gregorian release];


  • In a PHP project, how do you organize and access your helper objects?

    - by Pekka
    How do you organize and manage your helper objects like the database engine, user notification, error handling and so on in a PHP-based, object-oriented project?

    Say I have a large PHP CMS. The CMS is organized in various classes. A few examples:

    - the database object
    - user management
    - an API to create/modify/delete items
    - a messaging object to display messages to the end user
    - a context handler that takes you to the right page
    - a navigation bar class that shows buttons
    - a logging object
    - possibly, custom error handling

    etc.

    I am dealing with the eternal question of how to best make these objects accessible to each part of the system that needs them. My first approach, many years ago, was to have a $application global that contained initialized instances of these classes:

        global $application;
        $application->messageHandler->addMessage("Item successfully inserted");

    I then changed over to the Singleton pattern and a factory function:

        $mh =& factory("messageHandler");
        $mh->addMessage("Item successfully inserted");

    but I'm not happy with that either. Unit tests and encapsulation become more and more important to me, and in my understanding the logic behind globals/singletons destroys the basic idea of OOP. Then there is of course the possibility of giving each object a number of pointers to the helper objects it needs, probably the very cleanest, resource-saving and testing-friendly way, but I have doubts about the maintainability of this in the long run.

    Most PHP frameworks I have looked into use either the singleton pattern, or functions that access the initialized objects. Both are fine approaches, but as I said, I'm happy with neither. I would like to broaden my horizon on what is possible here and what others have done. I am looking for examples, additional ideas and pointers towards resources that discuss this from a long-term, real-world perspective. Also, I'm interested to hear about specialized, niche or plain weird approaches to the issue.

    Bounty: I am following the popular vote in awarding the bounty, the answer which is probably also going to give me the most. Thank you for all your answers!


  • Configuration Element Collection Section

    - by Matt
    I would like to set up a custom app configuration element collection section like this:

        <logSectionGroup>
          <logSection name="Testttt">
            <properties name="Pride">
              <pathName="TestingLog.txt"/>
              <deleteRetention="100"/>
              <deleteZeroRetention="5"/>
              <wildcard="*.txt"/>
            </properties>
            <properties name="Adhoc">
              <pathName="blah.txt"/>
              <deleteRetention="70"/>
              <deleteZeroRetention="3"/>
              <wildcard="*.*"/>
            </properties>
          </logSection>
        </logSectionGroup>

    Is this possible? Properties would be the configuration element, and logSection would be the configuration element collection. The problem is, I've only seen examples where you can have multiple instances of a single element instead of multiple elements:

        <Section name="Section1">
          <SubSection name="SubSection1">
            <Item name="Item1" />
            <Item name="Item2" />
          </SubSection>
          <SubSection name="SubSection2">
            <Item name="Item1" />
            <Item name="Item2" />
          </SubSection>
        </Section>

    When you use GetElementKey() you have it return the element "name" in the above example, but how would you return 4 different elements like "pathName", "deleteRetention", etc.?

    Here is my definition for PropElement:

        Public Class PropElement
            Inherits ConfigurationElement

            <ConfigurationProperty("pathName", IsRequired:=True)> _
            Public Property PathName() As String
                Get
                    Return CStr(Me("pathName"))
                End Get
                Set(ByVal value As String)
                    Me("pathName") = value
                End Set
            End Property

            <ConfigurationProperty("deleteRetention", DefaultValue:="0", IsRequired:=False)> _
            Public Property DeleteRetention() As Integer
                Get
                    Return CStr(Me("deleteRetention"))
                End Get
                Set(ByVal value As Integer)
                    Me("deleteRetention") = value
                End Set
            End Property

            <ConfigurationProperty("deleteZeroRetention", DefaultValue:="0", IsRequired:=False)> _
            Public Property DeleteZeroRetention() As Integer
                Get
                    Return CStr(Me("deleteZeroRetention"))
                End Get
                Set(ByVal value As Integer)
                    Me("deleteZeroRetention") = value
                End Set
            End Property

            <ConfigurationProperty("wildcard", DefaultValue:="*.*", IsRequired:=False)> _
            Public Property Wildcard() As String
                Get
                    Return CStr(Me("wildcard"))
                End Get
                Set(ByVal value As String)
                    Me("wildcard") = value
                End Set
            End Property
        End Class


  • How does Sentry aggregate errors?

    - by Hugo Rodger-Brown
    I am using Sentry (in a Django project), and I'd like to know how I can get the errors to aggregate properly. I am logging certain user actions as errors, so there is no underlying system exception, and am using the culprit attribute to set a friendly error name. The message is templated, and contains a common message ("User 'x' was unable to perform action because 'y'"), but is never exactly the same (different users, different conditions).

    Sentry clearly uses some set of attributes under the hood to determine whether to aggregate errors as the same exception, but despite having looked through the code, I can't work out how. Can anyone short-cut my having to dig further into the code and tell me what properties I need to set in order to manage aggregation as I would like?

    [UPDATE 1: event grouping]

    This line appears in sentry.models.Group:

        class Group(MessageBase):
            """
            Aggregated message which summarizes a set of Events.
            """
            ...
            class Meta:
                unique_together = (('project', 'logger', 'culprit', 'checksum'),)
            ...

    Which makes sense - project, logger and culprit I am setting at the moment - the problem is checksum. I will investigate further; however, 'checksum' suggests binary equivalence, which is never going to work - it must be possible to group instances of the same exception with different attributes?

    [UPDATE 2: event checksums]

    The event checksum comes from the sentry.manager.get_checksum_from_event method:

        def get_checksum_from_event(event):
            for interface in event.interfaces.itervalues():
                result = interface.get_hash()
                if result:
                    hash = hashlib.md5()
                    for r in result:
                        hash.update(to_string(r))
                    return hash.hexdigest()
            return hashlib.md5(to_string(event.message)).hexdigest()

    Next stop - where do the event interfaces come from?

    [UPDATE 3: event interfaces]

    I have worked out that interfaces refer to the standard mechanism for describing data passed into Sentry events, and that I am using the standard sentry.interfaces.Message and sentry.interfaces.User interfaces. Both of these will contain different data depending on the exception instance - and so a checksum will never match. Is there any way that I can exclude these from the checksum calculation? (Or at least the User interface value, as that has to be different - the Message interface value I could standardise.)

    [UPDATE 4: solution]

    Here are the two get_hash functions for the Message and User interfaces respectively:

        # sentry.interfaces.Message
        def get_hash(self):
            return [self.message]

        # sentry.interfaces.User
        def get_hash(self):
            return []

    Looking at these two, only the Message.get_hash interface will return a value that is picked up by the get_checksum_for_event method, and so this is the one that will be returned (hashed etc.). The net effect of this is that the checksum is evaluated on the message alone - which in theory means that I can standardise the message and keep the user definition unique.

    I've answered my own question here, but hopefully my investigation is of use to others having the same problem. (As an aside, I've also submitted a pull request against the Sentry documentation as part of this ;-))

    (Note to anyone using / extending Sentry with custom interfaces - if you want to avoid your interface being used to group exceptions, return an empty list.)


  • how to develop a program to minimize errors in human transcription of hand written surveys

    - by Alex. S.
    I need to develop custom software to do surveys. Questions may be multiple choice, or free text in a very few cases. I was asked to design a subsystem to check if there is any error in the manual data entry for the multiple-choice part. We're trying to speed up the user data entry process and to minimize human input differences between the digital forms and the original questionnaires. The surveys are filled with handwritten marks and text by human interviewers, so it's possible to find hard-to-read marks, or the user could accidentally select a different value in some question, and we would like to avoid that. The software must include some automatic control to detect possible typing differences. Each answer of the multiple-choice questions has the same probability of being selected.

    This question has two parts:

    The GUI. The simplest thing I have in mind is to implement the most usable design of the questions display: use large and readable fonts and space the choices generously. Is there something else? For faster input, I would like to use drop-down lists (favoring keyboard over mouse). Given the questions are grouped in sections, I would like to show the answers selected for the questions of that section, but this could slow down the process. Any other ideas?

    The error-checking subsystem. What else can I do to minimize or to check human typos in the multiple-choice questions? Is this a solvable problem? Is there some statistical methodology to check that the values entered by the users are the same as on the hand-filled forms? For example, let's suppose the survey has 5 questions, and each has 4 options. Let's say I have n survey forms filled in on paper by interviewers, and they're ready to be entered into the software; then how can I minimize the accidental differences that the manual transcription of the n surveys can introduce, without having to double-check everything in the 5 questions of the n surveys?

    My first suggestion is that at the end of the processing of all the hand-filled forms, the software could choose some forms randomly to make a double check of the responses in a few instances, but on what criteria can I make this selection? Would this validation be enough to cover everything in a significant way?

    The actual survey is nation-level and it has 56 pages with over 200 questions in total, so it will be a lot of handwritten pages by many people, and the intention is to reduce the likelihood of errors and to optimize speed in the data entry process. The surveys must be filled in on paper first, given the complications of taking laptops or handhelds with the interviewers.
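
    As a minimal illustration of the "choose some forms randomly" idea (my sketch, not part of the question; the sample size k is a placeholder, and choosing k rigorously, e.g. via acceptance-sampling tables, is exactly the statistical question being asked):

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;

        public final class AuditSampler {
            // Returns k forms chosen uniformly at random for double entry.
            public static <T> List<T> pickForDoubleCheck(List<T> forms, int k) {
                List<T> copy = new ArrayList<T>(forms);
                Collections.shuffle(copy); // uniform random permutation
                return copy.subList(0, Math.min(k, copy.size()));
            }
        }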


  • Class template specializations with shared functionality

    - by Thomas
    I'm writing a simple maths library with a template vector type:

        template<typename T, size_t N>
        class Vector {
        public:
            Vector<T, N> &operator+=(Vector<T, N> const &other);
            // ... more operators, functions ...
        };

    Now I want some additional functionality specifically for some of these. Let's say I want functions x() and y() on Vector<T, 2> to access particular coordinates. I could create a partial specialization for this:

        template<typename T>
        class Vector<T, 3> {
        public:
            Vector<T, 3> &operator+=(Vector<T, 3> const &other);
            // ... and again all the operators and functions ...
            T x() const;
            T y() const;
        };

    But now I'm repeating everything that already existed in the generic template. I could also use inheritance. Renaming the generic template to VectorBase, I could do this:

        template<typename T, size_t N>
        class Vector : public VectorBase<T, N> {
        };

        template<typename T>
        class Vector<T, 3> : public VectorBase<T, 3> {
        public:
            T x() const;
            T y() const;
        };

    However, now the problem is that all operators are defined on VectorBase, so they return VectorBase instances. These cannot be assigned to Vector variables:

        Vector<float, 3> v;
        Vector<float, 3> w;
        w = 5 * v; // error: no conversion from VectorBase<float, 3> to Vector<float, 3>

    I could give Vector an implicit conversion constructor to make this possible:

        template<typename T, size_t N>
        class Vector : public VectorBase<T, N> {
        public:
            Vector(VectorBase<T, N> const &other);
        };

    However, now I'm converting from Vector to VectorBase and back again. Even though the types are the same in memory, and the compiler might optimize all this away, it feels clunky and I don't really like to have potential run-time overhead for what is essentially a compile-time problem. Is there any other way to solve this?


  • Java Swing: JWindow appears behind all other process windows, and will not disappear

    - by Kim Jong Woo
    I am using JWindow to display my splash screen during application start-up; however, it will not appear in front of all windows as it should, and it will not disappear either.

        import java.awt.BorderLayout;
        import java.awt.Color;
        import java.awt.Dimension;
        import java.awt.Font;
        import java.awt.Toolkit;
        import javax.swing.BorderFactory;
        import javax.swing.ImageIcon;
        import javax.swing.JLabel;
        import javax.swing.JPanel;
        import javax.swing.JWindow;

        public class MySplash {
            public static MySplash INSTANCE;
            private static JWindow jw;

            public MySplash() {
                createSplash();
            }

            private void createSplash() {
                jw = new JWindow();
                JPanel content = (JPanel) jw.getContentPane();
                content.setBackground(Color.white);

                // Set the window's bounds, centering the window
                int width = 328;
                int height = 131;
                Dimension screen = Toolkit.getDefaultToolkit().getScreenSize();
                int x = (screen.width - width) / 2;
                int y = (screen.height - height) / 2;
                jw.setBounds(x, y, width, height);

                // Build the splash screen
                JLabel label = new JLabel(new ImageIcon("splash.jpg"));
                JLabel copyrt = new JLabel("SplashScreen Test", JLabel.CENTER);
                copyrt.setFont(new Font("Sans-Serif", Font.BOLD, 12));
                content.add(label, BorderLayout.CENTER);
                content.add(copyrt, BorderLayout.SOUTH);
                Color oraRed = new Color(156, 20, 20, 255);
                content.setBorder(BorderFactory.createLineBorder(oraRed, 0));
            }

            public synchronized static MySplash getInstance() {
                if (INSTANCE == null) {
                    INSTANCE = new MySplash();
                }
                return INSTANCE;
            }

            public void showSplash() {
                jw.setAlwaysOnTop(true);
                jw.toFront();
                jw.setVisible(true);
                return;
            }

            public void hideSplash() {
                jw.setAlwaysOnTop(false);
                jw.toBack();
                jw.setVisible(false);
                return;
            }
        }

    So in my main class, which extends JFrame, I call my splash screen with:

        SwingUtilities.invokeLater(new Runnable() {
            @Override
            public void run() {
                MySplash.getInstance().showSplash();
            }
        });

    However, the JWindow appears behind all open windows on my computer. Hiding the JWindow also doesn't work:

        SwingUtilities.invokeLater(new Runnable() {
            @Override
            public void run() {
                MySplash.getInstance().hideSplash();
            }
        });
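
    One thing that is sometimes suggested for splash windows, offered here as an assumption to test rather than a confirmed fix: an ownerless JWindow is treated as a low-priority window by some window managers, so giving it an owner frame, and disposing it rather than stacking it back, can behave better:

        import javax.swing.JFrame;
        import javax.swing.JWindow;

        public final class SplashOwnerSketch {
            // Hypothetical variant of createSplash(): pass an owner to the JWindow.
            public static JWindow createOwnedSplash() {
                JFrame owner = new JFrame("splash-owner"); // never shown, acts only as owner
                return new JWindow(owner);
            }

            // Hypothetical variant of hideSplash(): release the native window
            // outright instead of calling toBack()/setVisible(false).
            public static void hideSplash(JWindow jw) {
                jw.dispose();
            }
        }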


  • Greasemonkey is getting an empty document.body on select Google pages.

    - by Brock Adams
    Hi, I have a Greasemonkey script that processes Google search results. But it's failing in a few instances, when XPath searches (and document body) appear to be empty. Running the code in Firebug's console works every time; it only fails in a Greasemonkey script. I've boiled the problem down to a test Greasemonkey script, below. I'm using Firefox 3.5.9 and Greasemonkey 0.8.20100408.6 (but earlier versions had the same problem).

    Problem: Greasemonkey sees an empty document.body.

    Recipe to duplicate:

    1. Install the Greasemonkey script.
    2. Open a new tab or window.
    3. Navigate to Google.com (http://www.google.com/).
    4. Search on a simple term like "cats".
    5. Check Firefox's Error console (Ctrl-Shift-J) or Firebug's console. The script will report that the document body is empty.
    6. Hit refresh. The script will show a good result (document body found). Note that the failure only reliably appears on Google results obtained this way, and on a new tab/window.
    7. Turn JavaScript off globally (javascript.enabled set to false in about:config).
    8. Repeat steps 2 through 5. Only now will the Greasemonkey script work.

    It seems that Google's JavaScript is killing the DOM tree for Greasemonkey, somehow. I've tried a time-delayed retest and even a programmatic refresh; the script still fails to see the document body.

    Test script:

        // ==UserScript==
        // @name        TROUBLESHOOTING 2 snippets
        // @namespace   http://www.google.com/
        // @description For code that has funky misfires and defies standard debugging.
        // @include     http://*/*
        // ==/UserScript==

        function LocalMain (sTitle) {
            var sUserMessage = '';
            //var sRawHtml = unsafeWindow.document.body.innerHTML; //-- unsafeWindow makes no difference.
            var sRawHtml = document.body.innerHTML;

            if (sRawHtml) {
                sRawHtml = sRawHtml.replace(/^\s\s*/, '').substr(0, 60);
                sUserMessage = sTitle + ', Doc body = ' + sRawHtml + ' ...';
            }
            else {
                sUserMessage = sTitle + ', Document body seems empty!';
            }

            if (typeof (console) != "undefined") {
                console.log(sUserMessage);
            }
            else {
                if (typeof (GM_log) != "undefined")
                    GM_log(sUserMessage);
                else if (!sRawHtml)
                    alert(sUserMessage);
            }
        }

        LocalMain('Preload');
        window.addEventListener("load", function() { LocalMain('After load'); }, false);


  • deleting element objects of a std::vector using erase: a) memory handling and b) better way?

    - by memC
    hi, I have a vec_A that stores instances of class A as:

        vec_A.push_back(A());

    I want to remove some elements in the vector at a later stage and have two questions:

    a) The element is deleted as:

        vec_A.erase(iterator)

    Is there any additional code I need to add to make sure that there is no memory leak?

    b) Assume that the condition if (num < 5) is really "if num is among a specific numberList". Given this, is there a better way to delete the elements of a vector than what I am illustrating below?

        #include <vector>
        #include <stdio.h>
        #include <iostream>

        class A {
        public:
            int getNumber();
            A(int val);
            ~A() {};
        private:
            int num;
        };

        A::A(int val) {
            num = val;
        }

        int A::getNumber() {
            return num;
        }

        int main() {
            int i = 0;
            int num;
            std::vector<A> vec_A;
            std::vector<A>::iterator iter;

            for (i = 0; i < 10; i++) {
                vec_A.push_back(A(i));
            }

            iter = vec_A.begin();
            while (iter != vec_A.end()) {
                std::cout << "\n --------------------------";
                std::cout << "\n Size before erase =" << vec_A.size();
                num = iter->getNumber();
                std::cout << "\n num = " << num;
                if (num < 5) {
                    vec_A.erase(iter);
                } else {
                    iter++;
                }
                std::cout << "\n size after erase =" << vec_A.size();
            }

            std::cout << "\nPress RETURN to continue...";
            std::cin.get();
            return 0;
        }


  • Converting C source to C++

    - by Barry Kelly
    How would you go about converting a reasonably large (300K), fairly mature C codebase to C++?

    The kind of C I have in mind is split into files roughly corresponding to modules (i.e. less granular than a typical OO class-based decomposition), using internal linkage in lieu of private functions and data, and external linkage for public functions and data. Global variables are used extensively for communication between the modules. There is a very extensive integration test suite available, but no unit (i.e. module) level tests.

    I have in mind a general strategy:

    1. Compile everything in C++'s C subset and get that working.
    2. Convert modules into huge classes, so that all the cross-references are scoped by a class name, but leaving all functions and data as static members, and get that working.
    3. Convert huge classes into instances with appropriate constructors and initialized cross-references; replace static member accesses with indirect accesses as appropriate; and get that working.
    4. Now, approach the project as an ill-factored OO application, and write unit tests where dependencies are tractable, and decompose into separate classes where they are not; the goal here would be to move from one working program to another at each transformation.

    Obviously, this would be quite a bit of work. Are there any case studies / war stories out there on this kind of translation? Alternative strategies? Other useful advice?

    Note 1: the program is a compiler, and probably millions of other programs rely on its behaviour not changing, so wholesale rewriting is pretty much not an option.

    Note 2: the source is nearly 20 years old, and has perhaps 30% code churn (lines modified + added / previous total lines) per year. It is heavily maintained and extended, in other words. Thus, one of the goals would be to increase maintainability.

    [For the sake of the question, assume that translation into C++ is mandatory, and that leaving it in C is not an option. The point of adding this condition is to weed out the "leave it in C" answers.]


  • Patterns: Local Singleton vs. Global Singleton?

    - by Mike Rosenblum
    There is a pattern that I use from time to time, but I'm not quite sure what it is called. I was hoping that the SO community could help me out. The pattern is pretty simple, and consists of two parts:

    1. A singleton factory, which creates objects based on the arguments passed to the factory method.
    2. Objects created by the factory.

    So far this is just a standard "singleton" pattern or "factory pattern". The issue that I'm asking about, however, is that the singleton factory in this case maintains a set of references to every object that it ever creates, held within a dictionary. These references can sometimes be strong references and sometimes weak references, but it can always reference any object that it has ever created.

    When receiving a request for a "new" object, the factory first searches the dictionary to see if an object with the required arguments already exists. If it does, it returns that object; if not, it returns a new object and also stores a reference to the new object within the dictionary.

    This pattern prevents having duplicative objects representing the same underlying "thing". This is useful where the created objects are relatively expensive. It can also be useful where these objects perform event handling or messaging: having one object per item being represented can prevent multiple messages/events for a single underlying source. There are probably other reasons to use this pattern, but this is where I've found it useful.

    My question is: what to call this? In a sense, each object is a singleton, at least with respect to the data it contains. Each is unique. But there are multiple instances of this class, so it's not at all a true singleton. In my own personal terminology, I tend to call the factory method a "global singleton". I then call the created objects "local singletons". I sometimes also say that the created objects have "reference equality", meaning that if two variables reference the same data (the same underlying item), then the references they each hold must be to the same exact object, hence "reference equality". But these are my own invented terms, and I am not sure that they are good ones.

    Is there standard terminology for this concept? And if not, could some naming suggestions be made? Thanks in advance...
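
    For what it's worth, the canonicalizing-cache part of this is often discussed in terms of the Flyweight pattern (or "interning"), and a keyed factory that hands out one shared instance per key is commonly called a multiton. A minimal Java sketch (class names are mine, purely illustrative):

        import java.util.HashMap;
        import java.util.Map;

        public final class WidgetFactory {
            // One shared instance per key; swap in weak references if the
            // factory should not keep its products alive.
            private static final Map<String, Widget> CACHE = new HashMap<String, Widget>();

            public static synchronized Widget get(String key) {
                Widget w = CACHE.get(key);
                if (w == null) {
                    w = new Widget(key); // expensive construction happens once per key
                    CACHE.put(key, w);
                }
                return w;
            }

            public static final class Widget {
                private final String key;
                Widget(String key) { this.key = key; }
                public String key() { return key; }
            }
        }

    Because get() always returns the cached instance for a given key, two variables obtained with the same arguments necessarily hold references to the same object, which is the "reference equality" property described above.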


  • Problem setting row backgrounds in Android Listview

    - by zchtodd
    I have an application in which I'd like one row at a time to have a certain color. This seems to work about 95% of the time, but sometimes, instead of having just one row with this color, it will allow multiple rows to have the color.

    Specifically, a row is set to have the "special" color when it is tapped. In rare instances, the last row tapped will retain the color despite a call to setBackgroundColor attempting to make it otherwise.

        private OnItemClickListener mDirectoryListener = new OnItemClickListener() {
            public void onItemClick(AdapterView parent, View view, int pos, long id) {
                if (stdir.getStationCount() == pos) {
                    stdir.moreStations();
                    return;
                }
                if (playingView != null)
                    playingView.setBackgroundColor(Color.DKGRAY);
                view.setBackgroundColor(Color.MAGENTA);
                playingView = view;
                playStation(pos);
            }
        };

    I have confirmed with print statements that the code setting the row to gray is always called. Can anyone imagine a reason why this code might intermittently fail? If there is a pattern or condition that causes it, I can't tell. I thought it might have something to do with the activity lifecycle setting the "playingView" variable back to null, but I can't reliably reproduce the problem by switching activities or locking the phone.

        private class DirectoryAdapter extends ArrayAdapter {
            private ArrayList<Station> items;

            public DirectoryAdapter(Context c, int resLayoutId, ArrayList<Station> stations) {
                super(c, resLayoutId, stations);
                this.items = stations;
            }

            public int getCount() {
                return items.size() + 1;
            }

            public View getView(int position, View convertView, ViewGroup parent) {
                View v = convertView;
                LayoutInflater vi = (LayoutInflater) getContext().getSystemService(Context.LAYOUT_INFLATER_SERVICE);
                if (position == this.items.size()) {
                    v = vi.inflate(R.layout.morerow, null);
                    return v;
                }
                Station station = this.items.get(position);
                v = vi.inflate(R.layout.songrow, null);
                if (station.playing)
                    v.setBackgroundColor(Color.MAGENTA);
                else if (station.visited)
                    v.setBackgroundColor(Color.DKGRAY);
                else
                    v.setBackgroundColor(Color.BLACK);
                TextView title = (TextView) v.findViewById(R.id.title);
                title.setText(station.name);
                return v;
            }
        };
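
    A pattern often recommended for this situation (my sketch, not from the question): never keep a reference to a row View, because ListView recycles row views, so the object held in playingView may later be reused for a different row. Keep the selected position in the model instead and let getView() be the only code that touches row backgrounds:

        // Sketch (names are mine): remember the playing position, not the View.
        private int playingPosition = -1;

        public void onItemClick(AdapterView<?> parent, View view, int pos, long id) {
            playingPosition = pos;
            // Let getView() repaint every visible row from model state.
            ((DirectoryAdapter) parent.getAdapter()).notifyDataSetChanged();
            playStation(pos);
        }

        // ...and inside DirectoryAdapter.getView(), derive the color from state:
        // v.setBackgroundColor(position == playingPosition ? Color.MAGENTA : Color.BLACK);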


  • ASP.NET Application Level vs. Session Level and Global.asax...confused

    - by contactmatt
    The following text is from the book I'm reading, 'MCTS Self-Paced Training Kit (Exam 70-515): Web Applications Development with ASP.NET 4'. It gives the rundown of the Application Life Cycle:

    1. A user first makes a request for a page in your site.
    2. The request is routed to the processing pipeline, which forwards it to the ASP.NET runtime.
    3. The ASP.NET runtime creates an instance of the ApplicationManager class; this class instance represents the .NET Framework domain that will be used to execute requests for your application. An application domain isolates global variables from other applications and allows each application to load and unload separately, as required.
    4. After the application domain has been created, an instance of the HostingEnvironment class is created. This class provides access to items inside the hosting environment, such as directory folders.
    5. ASP.NET creates instances of the core objects that will be used to process the request. This includes HttpContext, HttpRequest, and HttpResponse objects.
    6. ASP.NET creates an instance of the HttpApplication class (or an instance is reused). This class is also the base class for a site's Global.asax file. You can use this class to trap events that happen when your application starts or stops. When ASP.NET creates an instance of HttpApplication, it also creates the modules configured for the application, such as the SessionStateModule.
    7. Finally, ASP.NET processes requests through the HttpApplication pipeline. This pipeline also includes a set of events for validating requests, mapping URLs, accessing the cache, and more.

    The book then demonstrated an example of using the Global.asax file:

        <script runat="server">
            void Application_Start(object sender, EventArgs e)
            {
                Application["UsersOnline"] = 0;
            }

            void Session_Start(object sender, EventArgs e)
            {
                Application.Lock();
                Application["UsersOnline"] = (int)Application["UsersOnline"] + 1;
                Application.UnLock();
            }

            void Session_End(object sender, EventArgs e)
            {
                Application.Lock();
                Application["UsersOnline"] = (int)Application["UsersOnline"] - 1;
                Application.UnLock();
            }
        </script>

    When does an application start? What's the difference between session and application level? I'm rather confused about how this is managed. I thought that application-level classes "sat on top of" an AppDomain object, and that the AppDomain contained information specific to that session for that user. Could someone please explain how IIS manages application-level classes, and how an HttpApplication class sits under an AppDomain? Anything is appreciated.


  • Perl: Compare and edit underlying structure in hash

    - by Mahfuzur Rahman Pallab
    I have a hash of complex structure and I want to perform a search and replace. The first hash is like the following:

        $VAR1 = {
            abc => {
                123 => ["xx", "yy", "zy"],
                456 => ["ab", "cd", "ef"],
            },
            def => {
                659 => ["wx", "yg", "kl"],
                456 => ["as", "sd", "df"],
            },
            mno => {
                987 => ["lk", "dm", "sd"],
            },
        }

    and I want to iteratively search for all '123'/'456' elements, and if a match is found, I need to do a comparison of the sublayer, i.e. of ['ab','cd','ef'] and ['as','sd','df'], and in this case keep only the one with ['ab','cd','ef']. So the output will be as follows:

        $VAR1 = {
            abc => {
                123 => ["xx", "yy", "zy"],
                456 => ["ab", "cd", "ef"],
            },
            def => {
                659 => ["wx", "yg", "kl"],
            },
            mno => {
                987 => ["lk", "dm", "sd"],
            },
        }

    So the deletion is based on the substructure, and not the index. How can it be done? Thanks for the help!!

    Let's assume that I will declare the values to be kept, i.e. I will keep 456 => ["ab", "cd", "ef"] based on a predeclared value of ["ab", "cd", "ef"] and delete any other instance of 456 anywhere else. The search has to be for every key. So the code will go through the hash, first taking 123 => ["xx", "yy", "zy"] and comparing it against itself throughout the rest of the hash; if no match is found, do nothing. If a match is found, as in the case of 456 => ["ab", "cd", "ef"], it will compare the two, and as I have said that in case of a match the one with ["ab", "cd", "ef"] should be kept, it will keep 456 => ["ab", "cd", "ef"] and discard any other instances of 456 anywhere else in the hash, i.e. it will delete 456 => ["as", "sd", "df"] in this case.


  • Unset/Change Binding in WPF

    - by captcalamares
    How can I unset the binding applied to an object so that I can apply another binding to it from a different location?

    Suppose I have two data templates bound to the same object reference. Data Template #1 is the default template to be loaded. I try to bind a button command to Function1 from my DataContext class:

        <Button Content="Button 1"
                CommandParameter="{Binding }"
                Command="{Binding DataContext.Function1, RelativeSource={RelativeSource AncestorType={x:Type Window}}}"/>

    This actually works and the function gets bound. However, when I try to load Data Template #2 on the same object (while trying to bind another button command to a different function (Function2) from my DataContext class):

        <Button Content="Button 2"
                CommandParameter="{Binding }"
                Command="{Binding DataContext.Function2, RelativeSource={RelativeSource AncestorType={x:Type Window}}}" />

    it doesn't work, and the first binding is still the one executed. Is there a workaround to this?

    EDIT (for better problem context): I defined my templates in my Window.Resources:

        <Window.Resources>
            <DataTemplate DataType="{x:Type local:ViewModel1}">
                <local:View1 />
            </DataTemplate>
            <DataTemplate DataType="{x:Type local:ViewModel2}">
                <local:View2 />
            </DataTemplate>
        </Window.Resources>

    View1.xaml and View2.xaml contain the button definitions that I described above (I want them to command the control of my process flow). ViewModel1 and ViewModel2 are my ViewModels, which implement the interface IPageViewModel, the type of my variable CurrentPageViewModel. In my XAML, I bound a ContentControl to the variable CurrentPageViewModel:

        <ContentControl Content="{Binding CurrentPageViewModel}" HorizontalAlignment="Center"/>

    In my .CS, I have a list defined as List<IPageViewModel> PageViewModels, which I use to contain the instances of my two ViewModels:

        PageViewModels.Add(new ViewModel1());
        PageViewModels.Add(new ViewModel2());

        // Set starting page
        CurrentPageViewModel = PageViewModels[0];

    When I change my CurrentPageViewModel to the other ViewModel is when I want the new binding to work. Unfortunately, it doesn't. Am I doing things the right way?


  • FluentNHibernate - AutoMappings producing incorrect one-to-many column key

    - by Alberto
    Hi, I'm new to NHibernate and FNH and am trying to map these simple classes by using the FluentNHibernate AutoMappings feature:

        public class TVShow : Entity
        {
            public virtual string Title { get; set; }
            public virtual ICollection<Season> Seasons { get; protected set; }

            public TVShow()
            {
                Seasons = new HashedSet<Season>();
            }

            public virtual void AddSeason(Season season)
            {
                season.TVShow = this;
                Seasons.Add(season);
            }

            public virtual void RemoveSeason(Season season)
            {
                if (!Seasons.Contains(season))
                {
                    throw new InvalidOperationException("This TV Show does not contain the given season");
                }
                season.TVShow = null;
                Seasons.Remove(season);
            }
        }

        public class Season : Entity
        {
            public virtual TVShow TVShow { get; set; }
            public virtual int Number { get; set; }
            public virtual IList<Episode> Episodes { get; set; }

            public Season()
            {
                Episodes = new List<Episode>();
            }

            public virtual void AddEpisode(Episode episode)
            {
                episode.Season = this;
                Episodes.Add(episode);
            }

            public virtual void RemoveEpisode(Episode episode)
            {
                if (!Episodes.Contains(episode))
                {
                    throw new InvalidOperationException("Episode not found on this season");
                }
                episode.Season = null;
                Episodes.Remove(episode);
            }
        }

    I'm also using a couple of conventions:

        public class MyForeignKeyConvention : IReferenceConvention
        {
            #region IConvention<IManyToOneInspector,IManyToOneInstance> Members

            public void Apply(FluentNHibernate.Conventions.Instances.IManyToOneInstance instance)
            {
                instance.Column("fk_" + instance.Property.Name);
            }

            #endregion
        }

    The problem is that FNH is generating the section below for the Seasons property mapping:

        <bag name="Seasons">
          <key>
            <column name="TVShow_Id" />
          </key>
          <one-to-many class="TVShowsManager.Domain.Season, TVShowsManager.Domain, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
        </bag>

    The column name above should be fk_TVShow rather than TVShow_Id. If I amend the hbm files produced by FNH, then the code works. Does anyone know what's wrong? Thanks in advance.


  • What classes should I map against with NHibernate?

    - by apollodude217
    Currently, we use NHibernate to map business objects to database tables. Said business objects enforce business rules: the set accessors will throw an exception on the spot if the contract for that property is violated. Also, the properties enforce relationships with other objects (sometimes bidirectional!). Well, whenever NHibernate loads an object from the database (e.g. when ISession.Get(id) is called), the set accessors of the mapped properties are used to put the data into the object.

    What's good is that the middle tier of the application enforces business logic. What's bad is that the database does not. Sometimes crap finds its way into the database. If crap is loaded into the application, it bails (throws an exception). Sometimes it clearly should bail because it cannot do anything, but what if it can continue working? E.g., an admin tool that gathers real-time reports runs a high risk of failing unnecessarily instead of allowing an admin to even fix a (potential) problem. I don't have an example on me right now, but in some instances, letting NHibernate use the "front door" properties that also enforce relationships (especially bidirectional ones) leads to bugs.

    What are the best solutions? Currently, I will, on a per-property basis, create a "back door" just for NHibernate:

        public virtual int Blah { get { return _Blah; } set { /* enforces BRs */ } }
        protected virtual int _Blah { get { return blah; } set { blah = value; } }
        private int blah;

    I showed the above in C# 2 (no default properties) to demonstrate how this gets us basically 3 layers of, or views onto, blah!!! While this certainly works, it does not seem ideal, as it requires the BL to provide one (public) interface for the app at large, and another (protected) interface for the data access layer.

    There is an additional problem: to my knowledge, NHibernate does not give you a way to distinguish between the name of the property in the BL and the name of the property in the entity model (i.e. the name you use when you query, e.g. via HQL, and whenever you give NHibernate the name (string) of a property). This becomes a problem when, at first, the BRs for some property Blah are no problem, so you refer to it in your O/R mapping... but then later, you have to add some BRs that do become a problem, so you have to change your O/R mapping to use a new _Blah property, which breaks all existing queries using "Blah" (a common problem with programming against strings).

    Has anyone solved these problems?!


  • Java DriverManager Always Assigns My Driver

    - by JGB146
    I am writing a driver to act as a wrapper around two separate MySQL connections (to distributed databases). Basically, the goal is to enable interaction with my driver for all applications, instead of requiring the application to sort out which database holds the desired data. Most of the code for this is in place, but I'm having a problem: when I attempt to create connections via the MySQL Driver, the DriverManager is returning an instance of my driver instead of the MySQL Driver. I'd appreciate any tips on what could be causing this and what could be done to fix it!

    Below are a few relevant snippets of code. I can provide more, but there's a lot, so I'd need to know what else you want to see.

    First, from MyDriver.java:

        public MyDriver() throws SQLException {
            DriverManager.registerDriver(this);
        }

        public Connection connect(String url, Properties info) throws SQLException {
            try {
                return new MyConnection(info);
            } catch (Exception e) {
                return null;
            }
        }

        public boolean acceptsURL(String url) throws SQLException {
            if (url.contains("jdbc:jgb://")) {
                return true;
            }
            return false;
        }

    It is my understanding that this acceptsURL function will dictate whether or not the DriverManager deems my driver a suitable fit for a given URL. Hence it should only be passing connections from my driver if the URL contains "jdbc:jgb://", right?

    Here's code from MyConnection.java:

        Connection c1 = null;
        Connection c2 = null;

        /**
         * Constructors
         */
        public DDBSConnection(Properties info) throws SQLException, Exception {
            info.list(System.out); // included for testing
            Class.forName("com.mysql.jdbc.Driver").newInstance();
            String url1 = "jdbc:mysql://server1.com/jgb";
            String url2 = "jdbc:mysql://server2.com/jgb";
            this.c1 = DriverManager.getConnection(
                url1, info.getProperty("username"), info.getProperty("password"));
            this.c2 = DriverManager.getConnection(
                url2, info.getProperty("username"), info.getProperty("password"));
        }

    And this tells me two things. First, the info.list() call confirms that the correct user and password are being sent. Second, because we enter an infinite loop, we see that the DriverManager is providing new instances of my connection as matches for the MySQL URLs, instead of the desired MySQL driver/connection.

    FWIW, I have separately tested implementations that go straight to the MySQL driver using this exact syntax (albeit only one at a time), and was able to successfully interact with each database individually from a test application outside of my driver.
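
    One detail of the JDBC contract seems relevant here: DriverManager.getConnection() simply calls connect() on every registered driver in turn, and does not consult acceptsURL() first; a driver signals "not mine" by returning null from connect(). Since the connect() above returns a new MyConnection for any URL, it will also claim the jdbc:mysql: URLs issued inside MyConnection's constructor. A guarded sketch of connect() for MyDriver:

        public Connection connect(String url, Properties info) throws SQLException {
            // Return null for URLs this driver does not handle, so DriverManager
            // falls through to the real MySQL driver for jdbc:mysql:// URLs.
            if (!acceptsURL(url)) {
                return null;
            }
            return new MyConnection(info);
        }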


  • vector related memory allocation question

    - by memC
    hi all, I am encountering the following bug. I have a class Foo. Instances of this class are stored in a std::vector vec of class B. In class Foo, I am creating an instance of class A by allocating memory using new, and deleting that object in ~Foo(). The code compiles, but I get a crash at runtime. If I disable delete my_a in the destructor of class Foo, the code runs fine (but there is going to be a memory leak). Could someone please explain what is going wrong here and suggest a fix? Thank you!

        class A {
        public:
            A(int val);
            ~A() {};
            int val_a;
        };

        A::A(int val) {
            val_a = val;
        }

        class Foo {
        public:
            Foo();
            ~Foo();
            void createA();
            A* my_a;
        };

        Foo::Foo() {
            createA();
        }

        void Foo::createA() {
            my_a = new A(20);
        }

        Foo::~Foo() {
            delete my_a;
        }

        class B {
        public:
            vector<Foo> vec;
            void createFoo();
            B() {};
            ~B() {};
        };

        void B::createFoo() {
            vec.push_back(Foo());
        }

        int main() {
            B b;
            int i = 0;
            for (i = 0; i < 5; i++) {
                std::cout << "\n creating Foo";
                b.createFoo();
                std::cout << "\n Foo created";
            }
            std::cout << "\nDone with Foo creation";
            std::cout << "\nPress RETURN to continue...";
            std::cin.get();
            return 0;
        }

