Search Results



  • Core Data: Overkill for simple, static UITableView-based iPhone App?

    - by David Foster
    Hello! I have a rather simple iPhone app consisting of numerous views containing a single, grouped table view. These views are held together in navigation controllers which are grouped in a tab bar. Simple stuff. My table views do little more than list text (like "Dog", "Cat" and "Weasel") and this data is being served from a collection of plists. It's perhaps worth mentioning too that these tables are 'static' in the sense that their data is pre-determined and will only ever be amended—and if so, very rarely indeed—by the developer (in this case, moi). This rudimentary approach has reached its limits though, and I think I'm going to need something a bit more relational. I have worked a tad with Core Data in the past, but only with apps whose data is determined by user input. I have four closely related questions: (1) Is Core Data overkill for an app consisting mainly of a selection of simple table views? (2) Do you recommend using Core Data to manage data which is predetermined and extremely unlikely to ever change? (3) Can one lock Core Data down so that its data can't change, thereby relinquishing my responsibility as the developer to handle the editing and saving of the managed object context? (4) How do I go about giving Core Data my predetermined data, and in a format I know that it can work with? Thanks a bunch guys.
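    One way to hand Core Data predetermined data is to seed the store from the existing plists on first launch. A minimal sketch under assumed names (an entity called Animal with a name attribute, a file Animals.plist, and an existing NSManagedObjectContext called context; none of these come from the post):

        NSArray *names = [NSArray arrayWithContentsOfFile:
            [[NSBundle mainBundle] pathForResource:@"Animals" ofType:@"plist"]];
        for (NSString *name in names) {
            // One managed object per plist entry.
            NSManagedObject *animal = [NSEntityDescription
                insertNewObjectForEntityForName:@"Animal"
                         inManagedObjectContext:context];
            [animal setValue:name forKey:@"name"];
        }
        NSError *error = nil;
        [context save:&error];   // persist the seeded objects once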

    Read the article

  • jQuery modal dialog on ajaxStart event

    - by bdl
    I'm trying to use a jQuery UI modal dialog as a loading indicator via the ajaxStart, ajaxStop / ajaxComplete events. When the page fires, an Ajax handler loads some data, and the modal dialog shows just fine. However, it never hides or closes the dialog when the Ajax event is complete. It's a very small bit of code from the local server that is returned, so the actual Ajax event is very quick. Here's my actual code for the modal div: $("#modalwindow").dialog({ modal: true, height: 50, width: 200, zIndex: 999, resizable: false, title: "Please wait..." }) .bind("ajaxStart", function(){ $(this).show(); }) .bind("ajaxStop", function(){ $(this).hide(); }); The Ajax event is just a plain vanilla $.ajax({}) GET method call. Based on some searching here and Google, I've tried altering the ajaxStop handler to use $("#modalwindow").close(), $("#modalwindow").destroy(), etc. (#modalwindow is referenced explicitly here for context). I've also tried using the standard $("#modalwindow").dialog({}).ajaxStart(... as well. Should I be binding the events to a different object? Or calling them from within the $.ajax() complete event? I should mention, I'm testing on the latest IE8, FF 3.6 and Chrome. All behave the same way.
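    One commonly suggested variation (a sketch, not taken from the post) is to drive the dialog through its own open/close API rather than show()/hide(), and to bind the global Ajax events to the document:

        $("#modalwindow").dialog({
            modal: true, height: 50, width: 200, zIndex: 999,
            resizable: false, title: "Please wait...", autoOpen: false
        });
        $(document)
            .ajaxStart(function () { $("#modalwindow").dialog("open"); })
            .ajaxStop(function () { $("#modalwindow").dialog("close"); });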

    Read the article

  • I'm having trouble spacing a menu control in an ASP.NET page. Is my solution the correct way to do t

    - by pkiyan
    Hey, I added a menu control to my page that is displayed vertically. I couldn't find a way to add spaces (I'd like about 5px) between the menu items, so I just did something similar to this: <asp:Menu ID="Menu1" runat="server" BackColor="ActiveBorder"> <Items> <asp:MenuItem NavigateUrl="~/About.aspx" Text="One" /> </Items> </asp:Menu> <p></p> <asp:Menu ID="Menu2" runat="server" BackColor="ActiveBorder"> <Items> <asp:MenuItem NavigateUrl="~/Default.aspx" Text="Two" /> </Items> </asp:Menu> I just created multiple menu controls with a single menu item control in them, and placed a break between the menu controls. This seems very wrong to me, but I could not figure out another way. Also, this is a bit off subject, but is it okay to use empty paragraph tags as line breaks? (Sometimes a br tag is too much.) Thanks.
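    If the goal is simply vertical space between items, the Menu control's item style may already cover it. A sketch (worth verifying against your ASP.NET version) using a single menu with a StaticMenuItemStyle:

        <asp:Menu ID="Menu1" runat="server" BackColor="ActiveBorder">
          <StaticMenuItemStyle ItemSpacing="5px" />
          <Items>
            <asp:MenuItem NavigateUrl="~/About.aspx" Text="One" />
            <asp:MenuItem NavigateUrl="~/Default.aspx" Text="Two" />
          </Items>
        </asp:Menu>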

    Read the article

  • Analyzing Web Application Speed

    - by Amy
    I'm a bit confused because the logical/programmer brain in me says that if all things are constant, the speed of a function must be constant. I am working on a PHP web application with jqGrid as a front end for showing the data. I am testing on my personal computer, so network traffic does not apply. I make an HTTP request to a PHP function, it returns the data, and then jqGrid renders it. What has me befuddled is that Firebug sometimes reports that this takes 300-600 milliseconds, and sometimes 3.68 seconds. I can run the request over and over again, with radically different response times. The query is the same. The number of users on the system is the same. No network latency. Same code. I'm not running other applications on the computer while testing. I could understand query caching improving performance on subsequent requests, but the speed is just fluctuating wildly with no rhyme or reason. So, my question is, what else can cause such variability in the response time? How can I determine what's doing it? More importantly, is there any way to get things more consistent?
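    To find out whether the time is spent in PHP/MySQL or in the browser-side rendering, one option is to instrument the server side and compare that figure against what Firebug reports. A rough sketch (the data-access call and log message are assumptions, not from the post):

        $start = microtime(true);
        $rows = fetch_grid_data();          // hypothetical data-access call
        $elapsed = microtime(true) - $start;
        error_log(sprintf('grid query + encode took %.3f s', $elapsed));
        echo json_encode($rows);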

    Read the article

  • Cannot run Python script on Windows with output redirected??

    - by Wai Yip Tung
    This is running on Windows 7 (64-bit), Python 2.6 with the Win32 Extensions for Python. I have a simple script that just prints "hello world". I can launch it with python hello.py, and in that case I can redirect the output to a file. But if I run it by just typing hello.py on the command line and redirect the output, I get an exception.
        C:> python hello.py
        hello world
        C:> python hello.py >output
        C:> type output
        hello world
        C:> hello.py
        hello world
        C:> hello.py >output
        close failed in file object destructor:
        Error in sys.excepthook:
        Original exception was:
    I think I first got this error after upgrading to Windows 7; I remember it working on XP. I have seen people talking about this bug (python-Bugs-1012692 | Can't pipe input to a python program), but that was a long time ago and it does not mention any solution. Has anyone experienced this? Can anyone help?

    Read the article

  • Developing on both Windows & Linux machines simultaneously

    - by Jamie
    Sorry for the bad title (couldn't think of a better way to describe it). I have a Windows machine which I do development on. However, I have a new project which needs to interact with a Linux system (executing Linux commands etc.). So, obviously I can't do development on my Windows machine... and I don't wish to code on the dev machine, svn commit, and then svn update it on the Linux machine. Is there a way where any changes I make on my dev machine will be quickly mirrored to the Linux machine? SVN is not a very quick alternative and of course some changes will be very minor. Any ideas? A network share, I guess... but that's not very pretty (a bit slow too). As fellow developers I would like to know if you've been in a similar situation and how you've resolved it. On a further note, I can't just install Ubuntu as my development machine and mirror the commands, applications etc. from the Linux machine because it's a cluster 'master' machine and therefore it has quite a special configuration. Thanks guys! EDIT: I've also thought about having web services on the Linux machine and then just calling them from code, thus separating the platform development dependency. What do you think about that too? Thanks
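    One low-friction option (a sketch; the host and paths are made up) is to push changes with rsync over SSH, which only transfers the files that actually changed and is usually near-instant for small edits:

        rsync -avz --delete ./myproject/ user@cluster-master:/home/user/myproject/

    An editor save-hook or a simple file watcher can run the same command automatically after each change.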

    Read the article

  • Declaring a string array in class header file - compiler thinks string is variable name?

    - by Dave
    Hey everybody, I need a bit of a hand with declaring a string array in my class header file in C++. At the moment it looks like this: //Maze.h #include <string> class Maze { GLfloat mazeSize, mazeX, mazeY, mazeZ; string* mazeLayout; public: Maze ( ); void render(); }; and the constructor looks like this: //Maze.cpp #include <GL/gl.h> #include "Maze.h" #include <iostream> #include <fstream> Maze::Maze( ) { cin >> mazeSize; mazeLayout = new string[mazeSize]; mazeX = 2/mazeSize; mazeY = 0.25; mazeZ = 2/mazeSize; } I'm getting a compiler error that says: In file included from model-view.cpp:11: Maze.h:14: error: ISO C++ forbids declaration of ‘string’ with no type Maze.h:14: error: expected ‘;’ before ‘*’ token and the only sense I can make of it is that for some reason it thinks I want string as a variable name, not as a type declaration. If anybody could help me out that would be fantastic, I've been looking this up for a while and it's giving me the shits lol. Cheers guys
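    The error usually means exactly that: string lives in namespace std, and <string> only declares std::string. A minimal sketch of the header with the name qualified (the GL include is pulled into the header here so GLfloat is declared wherever Maze.h is used):

        // Maze.h
        #include <GL/gl.h>
        #include <string>

        class Maze {
            GLfloat mazeSize, mazeX, mazeY, mazeZ;
            std::string* mazeLayout;   // qualify the type (or add a using-declaration)
        public:
            Maze();
            void render();
        };

    The same qualification (std::cin, std::string) is then needed in Maze.cpp unless a using-declaration is added there.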

    Read the article

  • Extending the .NET type system so the compiler enforces semantic meaning of primitive values in cert

    - by Drew Noakes
    I'm working with geometry a bit at the moment and am converting a lot between degrees and radians. Unfortunately, both of these are represented by double, so there's no compile-time warning/error if I try to pass a value in degrees where radians are expected. I believe F# has a compile-time solution for this (called units of measure). I'd like to do something similar in C#. As another example, imagine a SQL library that accepts various query parameters as strings. It'd be good to have a way of enforcing that only clean strings were allowed to be passed in at runtime, and the only way to get a clean string was to pass through some SQL injection prevention logic. The obvious solution is to wrap the double/string/whatever in a new type to give it the type information the compiler needs. I'm curious if anyone has an alternative solution. If you do think wrapping is the only/best way, then please go into some of the downsides of the pattern (and any upsides I haven't mentioned, too). I'm especially concerned about the performance of abstracted primitive numeric types on my calculations at runtime.
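    For reference, a minimal sketch of the wrapping approach the question describes, so the compiler can tell the two units apart (the names are illustrative only):

        using System;

        public struct Radians
        {
            public readonly double Value;
            public Radians(double value) { Value = value; }

            public static Radians FromDegrees(double degrees)
            {
                return new Radians(degrees * Math.PI / 180.0);
            }
        }

        // A method declared as void Rotate(Radians angle) now rejects a plain double at compile time.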

    Read the article

  • Are finalizers ever allowed to call other managed classes' methods?

    - by romkyns
    I used to be pretty sure the answer is "no", as explained in Overriding the Finalize method and Object.Finalize documentation. However, while randomly browsing through FileStream in Reflector, I found that it can actually call just such a method from a finalizer: private SafeFileHandle _handle; ~FileStream() { if (this._handle != null) { this.Dispose(false); } } protected override void Dispose(bool disposing) { try { ... } finally { if ((this._handle != null) && !this._handle.IsClosed) // <=== HERE { this._handle.Dispose(); // <=== AND HERE } [...] } } I started wondering whether this will always work due to the exact way in which it's written, and hence whether the "do not touch managed classes from finalizers" is just a guideline that can be broken given a good reason and the necessary knowledge to do it right. I dug a bit deeper and found out that the worst that can happen when the "rule" is broken is that the managed object being accessed had already been finalized, or may be getting finalized in parallel on a separate thread. So if the SafeFileHandle's finalizer didn't do anything that would cause a subsequent call to Dispose to fail then the above should be fine... right? Question: so there might after all be situations in which a method on another managed class may be called reliably from a finalizer? I've always believed this to be false, but this code suggests that it's possible and that there can be good enough reasons to do it. Bonus: Observe that the SafeFileHandle will not even know it's being called from a finalizer, since this is just a normal call to Dispose(). The base class, SafeHandle, actually has two private methods, InternalDispose and InternalFinalize, and in this case InternalDispose will be called. Isn't this a problem? Why not?...
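    For contrast, the documented dispose pattern only touches other managed objects on the disposing == true path; a generic sketch (not FileStream's actual code):

        using System;

        class Wrapper : IDisposable
        {
            private System.IO.Stream _managed;   // another managed object
            private IntPtr _unmanagedHandle;     // unmanaged resource

            public void Dispose()
            {
                Dispose(true);
                GC.SuppressFinalize(this);
            }

            ~Wrapper() { Dispose(false); }       // finalizer path: disposing == false

            protected virtual void Dispose(bool disposing)
            {
                if (disposing && _managed != null)
                {
                    _managed.Dispose();          // safe: we are not running from the finalizer
                }
                // release _unmanagedHandle on both paths
            }
        }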

    Read the article

  • how to get jquery.couch.app.js to work with IE8

    - by fuzzy lollipop
    I have tested this on Windows XP SP3 and Windows 7 Ultimate in IE7 and IE8 (in all compatibility modes) and it fails the same way on both. I am running the latest HEAD from the couchapp repository. This works fine on my OSX 10.6.3 development machine. I have tested with Chrome 4.1.249.1064 (45376) and Firefox 3.6 and they both work fine. As do Safari 4 and Firefox 3.6 on OSX 10.6.3. Here is the error message Webpage error details User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0) Timestamp: Wed, 28 Apr 2010 03:32:55 UTC Message: Object doesn't support this property or method Line: 159 Char: 7 Code: 0 URI: http://192.168.0.105:5984/test/_design/test/vendor/couchapp/jquery.couch.app.js and here is the "offending" bit of code, which works in Chrome, Firefox and Safari just fine. It says the failure is on the line with qs.forEach() in the file jquery.couch.app.js 157 var qs = document.location.search.replace(/^\?/,'').split('&'); 158 var q = {}; 159 qs.forEach(function(param) { 160 var ps = param.split('='); 161 var k = decodeURIComponent(ps[0]); 162 var v = decodeURIComponent(ps[1]); 163 if (["startkey", "endkey", "key"].indexOf(k) != -1) { 164 q[k] = JSON.parse(v); 165 } else { 166 q[k] = v; 167 } 168 });
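    IE8's script engine does not provide Array.prototype.forEach (nor Array.prototype.indexOf), which matches the "Object doesn't support this property or method" message on that line. One sketch of a rewrite of that loop that avoids both:

        for (var i = 0; i < qs.length; i++) {
            var ps = qs[i].split('=');
            var k = decodeURIComponent(ps[0]);
            var v = decodeURIComponent(ps[1]);
            if (k === "startkey" || k === "endkey" || k === "key") {
                q[k] = JSON.parse(v);
            } else {
                q[k] = v;
            }
        }

    Alternatively, loading an ES5-style shim for the array methods before jquery.couch.app.js would leave the library untouched.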

    Read the article

  • Advantages of Using Linux as primary developer desktop

    - by Nick N
    I want to get some input on some of the advantages of why developers should and need to use Linux as their primary development desktop on a daily basis as opposed to using Windows. This is particularly helpful when your Dev, QA, and Production environments are Linux. The current analogy that I keep coming back to is: if I build my demo car as a Ford Escort, but my project car is a Ford Mustang, it doesn't make sense at all. I'm currently at an IT department that allows dual boot with Windows and Linux, but some run Linux while the vast majority use Windows. Here are several advantages that I've come up with since using Linux as a primary desktop. The same exact operating system as Dev, QA, and Production. The same scripts (.sh) instead of maintaining both (.bat and .sh); somewhat mitigated by using cygwin, but still a bit different. The team learns simple commands such as cd, ls, cat, top, and advanced commands like pkill, pgrep, chmod, su, sudo, ssh, scp. Full access to installs typical for Linux, such as RPM and DEB installs, just like the target environments. The list could go on and on, but I want to get some feedback on anything that I may have missed, or even any disadvantages (of course there are some). To me it makes sense to migrate an entire team over to using Linux, and to use VirtualBox running Windows XP VMs to test functional items that 95% of the world uses. There is a similar but slightly different thread going on here as well. link text

    Read the article

  • Technical choices in unmarshaling hash-consed data

    - by Pascal Cuoq
    There seems to be quite a bit of folklore knowledge floating about in restricted circles about the pitfalls of hash-consing combined with marshaling-unmarshaling of data. I am looking for citable references to these tidbits. For instance, someone once pointed me to library aterm and mentioned that the authors had clearly thought about this and that the representation on disk was bottom-up (children of a node come before the node itself in the data stream). This is indeed the right way to do things when you need to re-share each node (with a possible identical node already in memory). This re-sharing pass needs to be done bottom-up, so the unmarshaling itself might as well be, too, so that it's possible to do everything in a single pass. I am in the process of describing difficulties encountered in our own context, and the solutions we found. I would appreciate any citable reference to the kind of aforementioned folklore knowledge. Some people obviously have encountered the problems before (the aterm library is only one example). But I didn't find anything in writing. Even the little piece of information I have about aterm is hearsay. I am not worried it's not reliable (you can't make this up), but "personal communication" and "look how it's done in the source code" are considered poor form in citations. I have enough references on hash-consing alone. I am only interested in references where it interferes with other aspects of programming, such as marshaling or distribution.

    Read the article

  • What is the way to go to fake my database layer in a unit test?

    - by Michel
    Hi, I have a question about unit testing. Say I have a controller with one create method which puts a new customer in the database: //code a bit shortened public ActionResult Create(FormCollection formcollection){ client c = new client(); c.Name = formcollection["name"]; ClientService.Save(c); } ClientService would call a data-layer object and save it in the database. What I do now is create a database test script and set my database in a known condition before testing. So when I test this method in the unit test, I know that there must be one more client in the database, and what its name is. In short: ClientController cc = new ClientController(); cc.Create(new FormCollection (){name="John"}); //I know I had 10 clients before Assert.AreEqual(11, ClientService.GetNumberOfClients()); //the last inserted one is John Assert.AreEqual("John", ClientService.GetAllClients()[10].Name); So I've read that unit testing should not be hitting the database, and I've set up an IoC container for the database classes, but then what? I can create a fake database class, and make it do nothing. But then of course my assertions will not work, because if I say GetNumberOfClients() it will always return X because it has no interaction with the fake database class used in the Create method. I can also create a List of Clients in the fake database class, but as there will be two different instances created (one in the controller action and one in the unit test), they will have no interaction. What is the way to make this unit test work without a database?
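    The usual way out of the "two separate instances" problem is to hand the controller the very same fake the test asserts against, via constructor injection. A sketch with hypothetical interface and type names (it assumes the controller takes its repository through the constructor; usings for the collections, MVC, and test framework are omitted):

        public class FakeClientRepository : IClientRepository
        {
            private readonly List<Client> _clients = new List<Client>();
            public void Save(Client c) { _clients.Add(c); }
            public int GetNumberOfClients() { return _clients.Count; }
        }

        // In the test: controller and assertions share one fake instance.
        var fake = new FakeClientRepository();
        var cc = new ClientController(fake);
        cc.Create(new FormCollection { { "name", "John" } });
        Assert.AreEqual(1, fake.GetNumberOfClients());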

    Read the article

  • URL shortening: using inode as short name?

    - by Licky Lindsay
    The site I am working on wants to generate its own shortened URLs rather than rely on a third party like tinyurl or bit.ly. Obviously I could keep a running count of new URLs as they are added to the site and use that to generate the short URLs. But I am trying to avoid that if possible since it seems like a lot of work just to make this one thing work. As the things that need short URLs are all real physical files on the webserver, my current solution is to use their inode numbers, as those are already generated for me, ready to use and guaranteed to be unique. function short_name($file) { $ino = @fileinode($file); $s = base_convert($ino, 10, 36); return $s; } This seems to work. Question is, what can I do to make the short URL even shorter? On the system where this is being used, the inodes for newly added files are in a range that makes the function above return a string 7 characters long. Can I safely throw away some (half?) of the bits of the inode? And if so, should it be the high bits or the low bits? I thought of using the crc32 of the filename, but that actually makes my short names longer than using the inode. Would something like this have any risk of collisions? I've been able to get down to single digits by picking the right value of "$referencefile". function short_name($file) { $ino = @fileinode($file); // arbitrarily selected pre-existing file, // as all newer files will have higher inodes $ino = $ino - @fileinode($referencefile); $s = base_convert($ino, 10, 36); return $s; }

    Read the article

  • MySQL Normalization stored procedure performance

    - by srkiNZ84
    Hi, I've written a stored procedure in MySQL to take values currently in a table and to "Normalize" them. This means that for each value passed to the stored procedure, it checks whether the value is already in the table. If it is, then it stores the id of that row in a variable. If the value is not in the table, it stores the newly inserted value's id. The stored procedure then takes the ids and inserts them into a table which is equivalent to the original de-normalized table, but this table is fully normalized and consists of mainly foreign keys. My problem with this design is that the stored procedure takes approximately 10ms or so to return, which is too long when you're trying to work through some 10 million records. My suspicion is that the performance has to do with the way in which I'm doing the inserts. i.e. INSERT INTO TableA (first_value) VALUES (argument_from_sp) ON DUPLICATE KEY UPDATE id=LAST_INSERT_ID(id); SET @TableAId = LAST_INSERT_ID(); The "ON DUPLICATE KEY UPDATE" is a bit of a hack, due to the fact that on a duplicate key I don't want to update anything but rather just return the id value of the row. If you miss this step though, the LAST_INSERT_ID() function returns the wrong value when you're trying to run the "SET ..." statement. Does anyone know of a better way to do this in MySQL? Thank you
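    Part of that per-call cost is often the implicit commit that each statement pays under autocommit. One thing worth trying (a sketch; the procedure name is made up) is to batch many calls inside a single transaction so the flush to disk happens once per batch rather than once per row:

        START TRANSACTION;
        CALL normalize_row('value 1');
        CALL normalize_row('value 2');
        -- ... a few thousand rows per batch ...
        COMMIT;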

    Read the article

  • Correct Time Display

    - by Matthew
    Guys, I'm looking to get this correct and I'm getting a bit frustrated with it. What I want to do is get hours and days and weeks correct. Example:
        if this post is < 60min old then have it read: Posted less than 1 minute ago
        if this post is < 120min old then have it read: Posted 1 hour ago
        if this post is > 120min old then have it read: Posted 1 hours ago
        if this post is < 1440min old then have it read: Posted 1 day ago
        if this post is > 1440min old then have it read: Posted 2 days ago
    Is that right?? This is what I have so far:
        if (lapsedTime < 60) {
            return '< 1 mimute';
        } else if (lapsedTime < (60*60)) {
            return Math.round(lapsedTime / 60) + 'minutes';
        } else if (lapsedTime < (12*60*60)) {
            return Math.round(lapsedTime / 2400) + 'hr';
        } else if (lapsedTime < (24*60*60)) {
            return Math.round(lapsedTime / 3600) + 'hrs';
        } else if (lapsedTime < (7*24*60*60)) {
            return Math.round(lapsedTime / 86400) + 'days';
        } else {
            return Math.round(lapsedTime / 604800) + 'weeks';
        }
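    For comparison, here is one consistent set of cutoffs, assuming lapsedTime is in seconds (a sketch, not necessarily what the poster intends for the wording of each label):

        function timeAgo(lapsedTime) {
            if (lapsedTime < 60)               { return 'less than 1 minute'; }
            if (lapsedTime < 60 * 60)          { return Math.floor(lapsedTime / 60) + ' minutes'; }
            if (lapsedTime < 24 * 60 * 60)     { return Math.floor(lapsedTime / 3600) + ' hours'; }
            if (lapsedTime < 7 * 24 * 60 * 60) { return Math.floor(lapsedTime / 86400) + ' days'; }
            return Math.floor(lapsedTime / 604800) + ' weeks';
        }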

    Read the article

  • WPF-Can a XAML object be a source as well as a target for bindings?

    - by iambic77
    I was wondering if it's possible to have a TextBlock as a target and a source? Basically I have a bunch of entities which have simple relationships to other entities (like Entity1 Knows Entity3, Entity3 WorksAt Entity2 etc.) I have a Link class that stores SourceEntity, Relationship and TargetEntity details. What I want to be able to do is to select an entity then display the relationships related to that entity, with the target entities of each relationship listed underneath the relationship names. When an entity is selected, an ObservableCollection is populated with the Links for that particular entity (SelectedEntityLinks<Link>). Because each entity could have the same relationship to more than one target entity (Entity1 could know both Entity3 and Entity4 for eg.), I've created a method GetThisRelationshipEntities() that takes a relationship name as a parameter, looks through SelectedEntityLinks for relationship names that match the parameter, and returns an ObservableCollection with the target entities of that relationship. Hope I'm making this clear. In my xaml I have a WrapPanel to display each relationship name in a TextBlock: <TextBlock x:Name="relationship" Text="{Binding Path=Relationship.Name}" /> Then underneath that another Textblock which should display the results of GetThisRelationshipEntities(String relationshipName). So I want the "relationship" TextBlock to both get its Text from the binding I've shown above, but also to provide its Text as a parameter to the GetThisRelationshipEntities() method which I've added to <UserControl.Resources> as an ObjectDataProvider. Sorry if this is a bit wordy but I hope it's clear. Any pointers/advice would be great. Many thanks.

    Read the article

  • How to fetch and populate backbone model for Google Places JS API?

    - by code-gijoe
    I'm implementing a system that requires access to the Google Places JS API. I've been using Rails for most of the project, but now I want to inject a bit of AJAX into one of my views. Basically it is a view that displays places near your location. For this, I'm using the JS API of Google Places. A quick workflow would be: 1- The user inputs a text query and hits enter. 2- There is an AJAX call to request data from the Google Places API. 3- The successful result is presented to the user. The problem is primarily in step 2. I want to use Backbone for this, but when I create a Backbone model, it requests to the 'rootURL'. This wouldn't be a problem if the requests to Places were done from the server, but they are not. A Places call is done like this: service = new google.maps.places.PlacesService(map); service.nearbySearch(request, callback); Passing a callback function: function callback(results, status) { if (status == google.maps.places.PlacesServiceStatus.OK) { for (var i = 0; i < results.length; i++) { var place = results[i]; createMarker(results[i]); } } } Is it possible to override the 'fetch' method in the Backbone model and populate the model with the successful Places result? Is this a bad idea?
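    Overriding fetch is possible; the model (or a collection) just has to call the Places service itself and feed the callback's results back into Backbone instead of going through the default HTTP sync. A sketch under those assumptions (map and the request object come from elsewhere on the page, and the collection name is made up):

        var NearbyPlaces = Backbone.Collection.extend({
            fetch: function (options) {
                var self = this;
                var service = new google.maps.places.PlacesService(map);
                service.nearbySearch(options.request, function (results, status) {
                    if (status == google.maps.places.PlacesServiceStatus.OK) {
                        self.reset(results);   // populate the collection with the Places results
                        if (options.success) { options.success(self, results); }
                    }
                });
            }
        });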

    Read the article

  • Head Rotation in Opposite Direction with GLM and Oculus Rift SDK

    - by user3434662
    I am 90% there in getting orientation to work. I am just trying to resolve one last bit and I hope someone can point me to any easy errors I am making but not seeing. My code works except when a person looks left the camera actually rotates right. Vice versa with looking right, camera rotates left. Any idea what I am doing wrong here? I retrieve the orientation from the Oculus Rift like so: OVR::Quatf OculusRiftOrientation = PredictedPose.Orientation; glm::vec3 CurrentEulerAngles; glm::quat CurrentOrientation; OculusRiftOrientation.GetEulerAngles<OVR::Axis_X, OVR::Axis_Y, OVR::Axis_Z, OVR::Rotate_CW, OVR::Handed_R> (&CurrentEulerAngles.x, &CurrentEulerAngles.y, &CurrentEulerAngles.z); CurrentOrientation = glm::quat(CurrentEulerAngles); And here is how I calculate the LookAt: /* DirectionOfWhereCameraIsFacing is calculated from mouse and standing position so that you are not constantly rotating after you move your head. */ glm::vec3 DirectionOfWhereCameraIsFacing; glm::vec3 RiftDirectionOfWhereCameraIsFacing; glm::vec3 RiftCenterOfWhatIsBeingLookedAt; glm::vec3 PositionOfEyesOfPerson; glm::vec3 CenterOfWhatIsBeingLookedAt; glm::vec3 CameraPositionDelta; RiftDirectionOfWhereCameraIsFacing = DirectionOfWhereCameraIsFacing; RiftDirectionOfWhereCameraIsFacing = glm::rotate(CurrentOrientation, DirectionOfWhereCameraIsFacing); PositionOfEyesOfPerson += CameraPositionDelta; CenterOfWhatIsBeingLookedAt = PositionOfEyesOfPerson + DirectionOfWhereCameraIsFacing * 1.0f; RiftCenterOfWhatIsBeingLookedAt = PositionOfEyesOfPerson + RiftDirectionOfWhereCameraIsFacing * 1.0f; RiftView = glm::lookAt(PositionOfEyesOfPerson, RiftCenterOfWhatIsBeingLookedAt, DirectionOfUpForPerson);

    Read the article

  • How do I know which include path will be used in PHP?

    - by Joe Majewski
    When I run phpinfo() and look by the Configuration category under PHP Core, I see a directive titled include_path, with a local value and a master value. In this case, my local value is set to .: ./include: ../include: /usr/share/php: /usr/share/php/smarty: /usr/share/pear and my master value is set to .: /usr/share/php: /usr/share/pear: /usr/share/php/pear: /usr/share/php/smarty The reason I am trying to learn how this works is because there is a file in the system I am working on titled Smarty.class.php, which I'm sure sounds very familiar to anyone who uses Smarty Templating Engine. One of the PHP files has the following includes: require_once("Smarty.class.php"); require_once("user_info_class.inc"); The file user_info_class.inc is in the same directory as the file making the include, which makes perfect sense to me, and is the way that I've always referenced files. I decided that I wanted to open up the Smarty.class.php file and had assumed it would be in the same directory, but it was not. After doing a bit of digging, I discovered those php_ini variables, and was finally able to locate the file in the directory usr/share/php/smarty/. So it would seem that when making an include, it follows some sort of order between the Local and Master values for the include_path. Assuming that my deductions were correct thus far, can someone explain the order in which PHP searches for the files to be included?
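    For what it's worth, PHP walks the directories of the effective (local) include_path from left to right and uses the first match; the master value is only the php.ini default that the local value overrides. The path can be inspected or adjusted at runtime, for example:

        <?php
        echo get_include_path(), PHP_EOL;   // the effective (local) value for this request
        set_include_path('/usr/share/php/smarty' . PATH_SEPARATOR . get_include_path());
        require_once 'Smarty.class.php';    // found by scanning the path left to right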

    Read the article

  • Encapsulate update method inside of object or have method which accepts an object to update

    - by Tom
    Hi, I actually have 2 questions related to each other: I have an object (class) called, say, MyClass which holds data from my database. Currently I have a list of these objects (List<MyClass>) that resides in a singleton in a "communal area". I feel it's easier to manage the data this way and I fail to see how passing a class around from object to object is beneficial over a singleton (I would be happy if someone can tell me why). Anyway, the data may change in the database from outside my program and so I have to update the data every so often. To update the list of MyClass I have a method called, say, Update, written in another class which accepts a list of MyClass. This updates all the instances of MyClass in the list. However would it be better instead to encapsulate the Update() method inside the MyClass object, so instead I would say foreach(MyClass obj in MyClassList) { obj.update(); } What is the better implementation and why? The update method requires an XML reader. I have written an XML reader class which is basically a wrapper over the standard XML reader the language natively provides which provides application-specific data collection. Should the XML reader class be in any way in the "inheritance path" of the MyClass object - the MyClass object inherits from the XML reader because it uses a few methods. I can't see why it should. I don't like the idea of declaring an instance of the XML reader class inside of MyClass, and a MyClass object is meant to be a simple "record" from the database and I feel giving it loads of methods and other object instances is a bit messy. Perhaps my XML reader class should be static, but C#'s native XMLReader isn't static? Any comments would be greatly appreciated Thanks Thomas

    Read the article

  • Should I use two queries, or is there a way to JOIN this in MySQL/PHP?

    - by Jack W-H
    Morning y'all! Basically, I'm using a table to store my main data - called 'Code' - a table called 'Tags' to store the tags for each code entry, and a table called 'code_tags' to intersect it. There's also a table called 'users' which stores information about the users who submitted each bit of code. On my homepage, I want 5 results returned from the database. Each returned result needs to list the code's title, summary, and then fetch the author's firstname based on the ID of the person who submitted it. I've managed to achieve this much so far (woot!). My problem lies when I try to collect all the tags as well. At the moment this is a pretty big query and it's scaring me a little. Here's my problematic query: SELECT code.*, code_tags.*, tags.*, users.firstname AS authorname, users.id AS authorid FROM code, code_tags, tags, users WHERE users.id = code.author AND code_tags.code_id = code.id AND tags.id = code_tags.tag_id ORDER BY date DESC LIMIT 0, 5 What it returns is correct looking data, but several repeated rows for each tag. So for example if a Code entry has 3 tags, it will return an identical row 3 times - except in each of the three returned rows, the tag changes. Does that make sense? How would I go about changing this? Thanks! Jack
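    One way to keep this to a single query and a single row per code entry is GROUP_CONCAT (a sketch; it assumes the tag text lives in a column called tags.name and that the date column is on the code table):

        SELECT code.*,
               users.firstname AS authorname,
               users.id AS authorid,
               GROUP_CONCAT(tags.name ORDER BY tags.name SEPARATOR ', ') AS taglist
        FROM code
        JOIN users     ON users.id = code.author
        JOIN code_tags ON code_tags.code_id = code.id
        JOIN tags      ON tags.id = code_tags.tag_id
        GROUP BY code.id
        ORDER BY code.date DESC
        LIMIT 0, 5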

    Read the article

  • Auto-rotating freshly created interface

    - by zoul
    Hello! I have trouble with auto-rotating interfaces in my iPad app. I have a class called Switcher that observes the interface rotation notifications and when it receives one, it switches the view in window, a bit like this: - (void) orientationChanged: (NSNotification*) notice { UIDeviceOrientation newIO = [[UIDevice currentDevice] orientation]; UIViewController *newCtrl = /* something based on newIO */; [currentController.view removeFromSuperview]; // remove the old view [window addSubview newCtrl.view]; [self setCurrentController:newCtrl]; } The problem is that the new view does not auto-rotate. My auto-rotation callback in the controller class looks like this: - (BOOL) shouldAutorotateToInterfaceOrientation: (UIInterfaceOrientation) io { NSString *modes[] = {@"unknown", @"portrait", @"portrait down", @"landscape left", @"landscape right"}; NSLog(@"shouldAutorotateToInterfaceOrientation: %i (%@)", io, modes[io]); return YES; } But no matter how I rotate the device, I find the following in the log: shouldAutorotateToInterfaceOrientation: 1 (portrait) shouldAutorotateToInterfaceOrientation: 1 (portrait) …and the willRotateToInterfaceOrientation:duration: does not get called at all. Now what? The orientation changing is becoming my least favourite part of the iPhone SDK… (I can’t check the code on the device yet, could it be a bug in the simulator?) PS. The subscription code looks like this: [[UIDevice currentDevice] beginGeneratingDeviceOrientationNotifications]; [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(orientationChanged:) name:UIDeviceOrientationDidChangeNotification object:nil];

    Read the article

  • Cairo / GTK example code crashes when window is too big or maximized

    - by user1890673
    I have copied and compiled the source code available in the section titled "Full Source". http://cairographics.org/threaded_animation_with_cairo/ I adapted this code to a project that I'm working on, only to find that the app would crash when I made the window too big. Going back to the original example code, it too crashes when the window is too big (1000x1000 or so). I narrowed it down in the example to this line, which appears to be responsible: pixmap = gdk_pixmap_new(window->window, 500, 500, -1); where pixmap is of type GdkPixmap*. Resizing the window overwrites pixmap with a new pixmap that is the size of the window. I am doing this in Eclipse Juno on Windows Vista, 32-bit. My compiler is MinGW version 0.5-beta-20120426-1. My GTK+ version is 2.24.10 and apparently Cairo is 1.10.2. I added all of the includes and libraries for GTK and also added the compiler switch -mms-bitfields. Is there a limit to the size of a pixmap or something? I'm just starting with GTK with examples so I'm not sure where to go if this example doesn't work.

    Read the article

  • Which source control paradigm and solution to embed in a custom editor application?

    - by Greg Harman
    I am building an application that manages a number of custom objects, which may be edited concurrently by multiple users (using different instances of the application). These objects have an underlying serialized representation, and my plan is to persist them (through my application UI) in an external source control system. Of course this implies that my application can check the current version of an object for updates, a merging interface for each object, etc. My question is what source control paradigm(s) and specific solution(s) to support and why. The way I (perhaps naively) see the source control world is three general paradigms: Single-repository, locked access (MS SourceSafe) Single-repository, concurrent access (CVS/SVN) Distributed (Mercurial, Git) I haven't heard of anyone using #1 for quite a number of years, so I am planning to disregard this case altogether (unless I get a compelling argument otherwise). However, I'm at a loss as to whether to support #2 or #3, and which specific implementations. I'm concerned that the use paradigms are subtly different enough that I can't adequately capture basic operations in a single UI. The last bit of information I should convey is that this application is intended to be deployed in a commercial setting, where a source control system may already be in use. I would prefer not to support more than one solution unless it's really a deal-breaker, so wide adoption in a corporate setting is a plus.

    Read the article
