Search Results

Search found 74171 results on 2967 pages for 'data control'.


  • Dependency injection and IOC containers in a closed project

    - by Puckl
    Does it make sense to assemble my project with dependency injection containers if I am the only one who will use the code of that project? The question came up when I read this IoC article: http://martinfowler.com/articles/injection.html The justification for using dependency injection in that article is that friends can reuse a class and replace the classes it depends on with their own, because those dependencies get injected rather than instantiated inside the class. I would only use it to inject objects where they are needed instead of passing them through layers to their target. (Which is not so bad, I learned here: Is it bad practice to pass instances through several layers?) (Maybe I will reuse parts of the project, who knows, but I don't know if that is a good justification.)
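    For illustration, a minimal sketch in Python (the class names are hypothetical, not from the article) of the difference between a class instantiating its collaborator and having it injected:

        class SmtpMailer:
            def send(self, to, body):
                print(f"mail to {to}: {body}")

        # Hard-wired: the service builds its own mailer, so replacing it means editing this class.
        class HardWiredReportService:
            def __init__(self):
                self.mailer = SmtpMailer()

        # Injected: whoever constructs the service (you, a test, or an IoC container) picks the mailer.
        class ReportService:
            def __init__(self, mailer):
                self.mailer = mailer

        service = ReportService(SmtpMailer())   # a test could pass ReportService(FakeMailer())

    Whether that flexibility is worth a container when only one person consumes the code is exactly the trade-off in question.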

    Read the article

  • RPG Monster-Area, Spawn, Loot table Design

    - by daemonfire300
    I currently struggle with creating the database structure for my RPG. So far I have these tables: area (id) monster (id, area.id, monster.id, hp, attack, defense, name) item (id, some other values) loot (id = monster.id, item = item.id, chance) spawn (id = area.id, monster = monster.id, count) It is a browser-based game like e.g. Castle Age. The player can move from area to area. If a player enters an area, the system spawns new monsters into the monster table, based on the area.id and using the spawn table data. If a player kills a monster, the system takes the monster.id, looks up the items via the loot table and adds those items to the player's inventory. First, is this smart? Second, I need some kind of "monster_instance" table and "area_instance" table, since each player enters his very own "area" and does damage to his very own "monsters". Another approach would be adding a player.id to the monster table, so each monster spawned has its own "player", but I still need to assign them to an area, and I think this would overload the monster table if I put both the player.id and the area.id into it. What are your thoughts? Temporary solution: monster (id, attackDamage, defense, hp, exp, etc.) monster_instance (id, player.id, area_instance.id, hp, attackDamage, defense, monster.id, etc.) area (id, name, area.id access, restriction) area_instance (id, area.id, last_visited) spawn (id, area.id, monster.id) loot (id, monster.id, chance, amount, ?area.id?) An example system flow would be: Player enters area 1: the system creates an area_instance of type area.id = 1 and sets player.location to area.id = 1. If the player wants to battle monsters in the current area: the system fetches all spawn entries matching area.id == player.location and creates a new monster_instance for each spawn by fetching the corresponding monster base data from the monster table. If a monster is fetched more than once it may be cached. If the player actually attacks a monster: the system updates the corresponding monster_instance; if the monster dies, the instance is removed after creating the loot. If the player leaves the area: area_instance.last_visited is set to NOW(); if the player doesn't return to that area within a certain amount of time, the area_instance including all its monster_instances is deleted.
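    A rough sqlite3 sketch of the "temporary solution" tables (column names are illustrative only, not a finished design):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE monster          (id INTEGER PRIMARY KEY, name TEXT, hp INTEGER, attack INTEGER, defense INTEGER, exp INTEGER);
        CREATE TABLE area             (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE spawn            (id INTEGER PRIMARY KEY, area_id INTEGER REFERENCES area(id), monster_id INTEGER REFERENCES monster(id), count INTEGER);
        CREATE TABLE loot             (id INTEGER PRIMARY KEY, monster_id INTEGER REFERENCES monster(id), item_id INTEGER, chance REAL, amount INTEGER);
        CREATE TABLE area_instance    (id INTEGER PRIMARY KEY, area_id INTEGER REFERENCES area(id), player_id INTEGER, last_visited TEXT);
        CREATE TABLE monster_instance (id INTEGER PRIMARY KEY, monster_id INTEGER REFERENCES monster(id), area_instance_id INTEGER REFERENCES area_instance(id), hp INTEGER, attack INTEGER, defense INTEGER);
        """)

        # "Player 42 enters area 1": create the per-player area_instance, then spawn its monsters
        # (spawn.count ignored here for brevity).
        cur = conn.execute("INSERT INTO area_instance (area_id, player_id, last_visited) VALUES (1, 42, datetime('now'))")
        conn.execute("""
        INSERT INTO monster_instance (monster_id, area_instance_id, hp, attack, defense)
        SELECT m.id, ?, m.hp, m.attack, m.defense
        FROM spawn s JOIN monster m ON m.id = s.monster_id
        WHERE s.area_id = 1
        """, (cur.lastrowid,))

    The per-player state lives only in the *_instance tables, so the base monster and area tables never grow with the number of players.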

    Read the article

  • Why is a linked list implementation considered linear?

    - by VeeKay
    My apologies for asking such a simple question. Instead of posting such a basic question on SO, I felt it was more apt here. I tried finding an answer for this, but none of the ones I found are logically appealing or convincing to my understanding. Typically, computer memory is always linear. So is the term non-linear used for a data structure only in a logical sense? If so, to logically achieve non-linearity in linear computer memory, we use pointers. Right? In that case, if pointers are virtual implementations for achieving non-linearity, why would a data structure like a linked list be considered linear if in reality the nodes are never physically adjacent?
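    A small Python illustration of why the structure is still called linear even though the nodes are scattered in memory: each node has exactly one successor, so a traversal visits the elements one after another in a single sequence.

        class Node:
            def __init__(self, value, next=None):
                self.value = value
                self.next = next

        head = Node(1, Node(2, Node(3)))

        node = head
        while node:
            # id() happens to be the object's address in CPython; the nodes are almost certainly not contiguous
            print(node.value, hex(id(node)))
            node = node.next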

    Read the article

  • Extracting GPS Data from JPG files

    - by Peter W. DeBetta
    I have been very remiss in posting lately. Unfortunately, much of what I do now involves client work that I cannot post. Fortunately, someone asked me how he could get a formatted (e.g. tab-delimited) list of files along with the GPS data extracted from those files. He also added the constraints that this could not be a new piece of software (company security) and that it had to be scriptable. I did some searching around, and found some techniques for extracting GPS data, but was unable to find a complete solution. So, I did...(read more)
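    The post is truncated here, so the following is only one common scriptable approach, not necessarily the author's (it also relies on the Pillow library, which his "no new software" constraint may rule out): the coordinates live in the standard EXIF GPSInfo tags of the JPG.

        # Hedged sketch: dump GPS EXIF tags from a list of JPGs as tab-separated output.
        from PIL import Image, ExifTags

        def gps_tags(path):
            exif = Image.open(path)._getexif() or {}
            named = {ExifTags.TAGS.get(k, k): v for k, v in exif.items()}
            gps = named.get("GPSInfo", {})
            return {ExifTags.GPSTAGS.get(k, k): v for k, v in gps.items()}

        for name in ["photo1.jpg", "photo2.jpg"]:        # hypothetical file names
            tags = gps_tags(name)
            print(name, tags.get("GPSLatitude"), tags.get("GPSLongitude"), sep="\t")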

    Read the article

  • Drawing different per-pixel data on the screen

    - by Amir Eldor
    I want to draw different per-pixel data on the screen, where each pixel has a specific value according to my needs. An example may be a random noise pattern where each pixel is randomly generated. I'm not sure what is the correct and fastest way to do this. Locking a texture/surface and manipulating the raw pixel data? How is this done in modern graphics programming? I'm currently trying to do this in Pygame but realized I will face the same problem if I go for C/SDL or OpenGL/DirectX.
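    One straightforward way in Pygame is to build the pixel data as a NumPy array and copy it to a Surface in one call with pygame.surfarray, rather than setting pixels one at a time (a rough sketch):

        import numpy as np
        import pygame

        pygame.init()
        screen = pygame.display.set_mode((320, 240))

        # surfarray expects shape (width, height, 3); random 8-bit RGB noise per pixel
        noise = np.random.randint(0, 256, (320, 240, 3), dtype=np.uint8)
        pygame.surfarray.blit_array(screen, noise)
        pygame.display.flip()

    pygame.PixelArray (or Surface.set_at for single pixels) also works, but per-pixel Python loops are much slower than whole-array operations.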

    Read the article

  • Library and several small programs that use it: how should I structure my git repository?

    - by Dan
    I have some code that uses a library that I and others frequently modify (usually only by adding functions and methods). We each keep a local fork of the library for our own use. I also have a lot of small "driver" programs (~100 lines) that use the library and are used exclusively by me. Currently, I have both the driver programs and the library in the same repository, because I frequently make changes to both that are logically connected (adding a function to the library and then calling it). I'd like to merge my fork of the library with my co-workers' forks, but I don't want the driver programs to be part of the merged library. What's the best way to organize the git repositories for a large, shared library that needs to be merged frequently and a number of small programs that have changes that are connected to changes in the library?

    Read the article

  • Migrating Core Data to new UIManagedDocument in iOS 5

    - by samerpaul
    I have an app that has been on the store since iOS 3.1, so there is a large install base out there that still uses Core Data loaded up in my AppDelegate. In the most recent set of updates, I raised the minimum version to 4.3 but still kept the same way of loading the data. Recently, I decided it's time to make the minimum version 5.1 (especially with 6 around the corner), so I wanted to start using the new fancy UIManagedDocument way of using Core Data. The issue with this, though, is that the old database file is still sitting in the iOS app, so there is no automatic migration to the new document. You have to basically subclass UIManagedDocument with a new model class, and override a couple of methods to do it for you. Here's a tutorial on what I did for my app TimeTag.

    Step One: Add a new class file in Xcode and subclass "UIManagedDocument". Go ahead and also add a method to get the managedObjectModel out of this class. It should look like:

        @interface TimeTagModel : UIManagedDocument
        - (NSManagedObjectModel *)managedObjectModel;
        @end

    Step Two: Writing the methods in the implementation file (.m). I first added a shortcut method for the applicationDocumentsDirectory, which returns the URL of the app directory.

        - (NSURL *)applicationDocumentsDirectory
        {
            return [[[NSFileManager defaultManager] URLsForDirectory:NSDocumentDirectory inDomains:NSUserDomainMask] lastObject];
        }

    The next step was to pull in the managedObjectModel file itself (the .momd file). In my project, it's called "minimalTime".

        - (NSManagedObjectModel *)managedObjectModel
        {
            NSString *path = [[NSBundle mainBundle] pathForResource:@"minimalTime" ofType:@"momd"];
            NSURL *momURL = [NSURL fileURLWithPath:path];
            NSManagedObjectModel *managedObjectModel = [[NSManagedObjectModel alloc] initWithContentsOfURL:momURL];
            return managedObjectModel;
        }

    After that, I need to check for a legacy installation and migrate it to the new UIManagedDocument file instead. This is the overridden method:

        - (BOOL)configurePersistentStoreCoordinatorForURL:(NSURL *)storeURL ofType:(NSString *)fileType modelConfiguration:(NSString *)configuration storeOptions:(NSDictionary *)storeOptions error:(NSError **)error
        {
            // If a legacy store exists, copy it to the new location
            NSURL *legacyPersistentStoreURL = [[self applicationDocumentsDirectory] URLByAppendingPathComponent:@"minimalTime.sqlite"];

            NSFileManager *fileManager = [NSFileManager defaultManager];
            if ([fileManager fileExistsAtPath:legacyPersistentStoreURL.path])
            {
                NSLog(@"Old db exists");
                NSError *thisError = nil;
                [fileManager replaceItemAtURL:storeURL withItemAtURL:legacyPersistentStoreURL backupItemName:nil options:NSFileManagerItemReplacementUsingNewMetadataOnly resultingItemURL:nil error:&thisError];
            }

            return [super configurePersistentStoreCoordinatorForURL:storeURL ofType:fileType modelConfiguration:configuration storeOptions:storeOptions error:error];
        }

    Basically what's happening above is that it checks for the minimalTime.sqlite file inside the app's Documents directory on the iOS device. If the file exists, it tells you inside the console, and then tells the fileManager to replace the storeURL (from the method parameter) with the legacy URL. This basically gives your app access to all the existing data the user has generated (otherwise they would load into a blank app, which would be disastrous). It returns YES if successful (by calling its [super] method).
    Final step: Actually load this database. Due to how my app works, I actually have to load the database at launch (instead of shortly after, which would be ideal). I call a method called loadDatabase, which looks like this:

        - (void)loadDatabase
        {
            static dispatch_once_t onceToken;

            // Only do this once!
            dispatch_once(&onceToken, ^{
                // Get the URL
                // The minimalTimeDB name is just something I call it
                NSURL *url = [[self applicationDocumentsDirectory] URLByAppendingPathComponent:@"minimalTimeDB"];
                // Init the TimeTagModel (our custom class we wrote above) with the URL
                self.timeTagDB = [[TimeTagModel alloc] initWithFileURL:url];

                // Setup the undo manager if it's nil
                if (self.timeTagDB.undoManager == nil){
                    NSUndoManager *undoManager = [[NSUndoManager alloc] init];
                    [self.timeTagDB setUndoManager:undoManager];
                }

                // You have to actually check to see if it exists already (for some reason you can't just call "open it, and if it's not there, create it")
                if ([[NSFileManager defaultManager] fileExistsAtPath:[url path]]) {
                    // If it does exist, try to open it, and if it doesn't open, let the user (or at least you) know!
                    [self.timeTagDB openWithCompletionHandler:^(BOOL success){
                        if (!success) {
                            // Handle the error.
                            NSLog(@"Error opening up the database");
                        }
                        else{
                            NSLog(@"Opened the file--it already existed");
                            [self refreshData];
                        }
                    }];
                }
                else {
                    // If it doesn't exist, you need to attempt to create it
                    [self.timeTagDB saveToURL:url forSaveOperation:UIDocumentSaveForCreating completionHandler:^(BOOL success){
                        if (!success) {
                            // Handle the error.
                            NSLog(@"Error opening up the database");
                        }
                        else{
                            NSLog(@"Created the file--it did not exist");
                            [self refreshData];
                        }
                    }];
                }
            });
        }

    If you're curious what refreshData looks like, it sends out an NSNotification that the database has been loaded:

        - (void)refreshData
        {
            NSNotification *refreshNotification = [NSNotification notificationWithName:kNotificationCenterRefreshAllDatabaseData object:self.timeTagDB.managedObjectContext userInfo:nil];
            [[NSNotificationCenter defaultCenter] postNotification:refreshNotification];
        }

    The kNotificationCenterRefreshAllDatabaseData is just a constant I have defined elsewhere that keeps track of all the NSNotification names I use. I pass the managedObjectContext of the newly created file so that my view controllers can have access to it, and start passing it around to one another. The reason we do this as a Notification is because this is being run in the background, so we can't know exactly when it finishes. Make sure you design your app for this! Have some kind of loading indicator, or make sure your user can't attempt to create a record before the database actually exists, because it will crash the app.

    Read the article

  • Getting started with Team Foundation Server

    - by joe
    At work, we recently started using Team Foundation Server to manage our source code. I have no idea how to use this system; I do not even know how to check source code in and out. Does anyone know of a step-by-step tutorial on how to work with TFS? Just for basic operations, e.g. get the latest version, upload your changes, etc. I am accessing it from Visual Studio 2010. I also have access to the TFS web interface.

    Read the article

  • Keeping files that are often changed in sync between desktop and laptop

    - by N.N.
    I'm looking for a way to keep a desktop and a laptop in sync. What I want to keep in sync are some folders, mainly ~/Documents, that change often while I'm working on them. If it matters, I can connect to my desktop from anywhere via a URL, but my laptop is harder to access since it might be behind NAT and such. I have been looking at Ubuntu One, but it does not seem to go well with working on documents written in LaTeX. If I work on a .tex file in the Ubuntu One directory and compile it (with pdflatex) every now and then (as often as every 10 seconds when working), it creates several new files, including a PDF, which are uploaded to Ubuntu One; this seems wasteful, since it causes continuous uploads while working on .tex files. I also usually keep .tex documents version controlled with git, and then every commit (which can also happen frequently) causes an upload (through changes in ./.git), so again it happens continuously while working. Another example is editing images that are saved often. What I think would be best is for the sync to happen every ten minutes or at the end of every working session (but there might be some other way to handle this?).

    Read the article

  • Using Mercurial repository inside a Git one: Feasible? Sane?

    - by Portablejim
    I am thinking of creating a Mercurial repository under a Git repository, e.g. ..../git-repository/directory/hg-repo/ Is it possible to manage the two repositories (while keeping your sanity)? How similar is it to this? I am a computer science student at university. I manage my work in Git, mainly as a distribution mechanism (after realizing that rsync fails when you have changes in more than one place) between my desktop and a USB drive. I try to use Git as a VCS as I work. I have finished a semester where I did a small group project to prepare for a larger group project next year. We had to use Subversion, and experienced the joys of a centralised VCS (including downtime). I tried to keep the Subversion repository separate from my Git repository for the subject**, but it was annoying that it was separate (not in the place where I store assignments). I therefore moved to using a Subversion repository inside my Git repository. As I think ahead (maybe I am thinking too far ahead), I realise that I will have to try and convince people to use a DVCS, and Mercurial will probably be the one that is preferred (Windows and Mac GUI support, closer to Subversion). Having done some research into the whole Git vs Mercurial debate (though I have not used Mercurial at all), I still prefer Git. Can I have a Mercurial repository inside a Git one without going mad (or it ruining something)? Or is it something that I should not consider at all? (Or is it a bad question that should be deleted?) ** I think outside of Australia it is called a course

    Read the article

  • Data Transformation Pipeline

    - by davenewza
    I have created a data pipeline of sorts to transform coordinate data into more useful information. Here is the shell of the pipeline: public class PositionPipeline { protected List<IPipelineComponent> components; public PositionPipeline() { components = new List<IPipelineComponent>(); } public PositionPipelineEntity Process(PositionPipelineEntity position) { foreach (var component in components) { position = component.Execute(position); } return position; } public PositionPipeline RegisterComponent(IPipelineComponent component) { components.Add(component); return this; } } Every IPipelineComponent accepts and returns the same type - a PositionPipelineEntity. Code: public interface IPipelineComponent { PositionPipelineEntity Execute(PositionPipelineEntity position); } The PositionPipelineEntity needs to have many properties, many of which are unused in certain components and required in others. Some properties will also have become redundant by the end of the pipeline. For example, these components could be executed: TransformCoordinatesComponent: Parse the raw coordinate data into a Coordinate type. DetermineCountryComponent: Determine and store the country code. DetermineOnRoadComponent: Determine and store whether the coordinate is on a road. Code: pipeline .RegisterComponent(new TransformCoordinatesComponent()) .RegisterComponent(new DetermineCountryComponent()) .RegisterComponent(new DetermineOnRoadComponent()); pipeline.Process(positionPipelineEntity); The PositionPipelineEntity type: public class PositionPipelineEntity { // Only relevant to the TransformCoordinatesComponent public decimal RawCoordinateLatitude { get; set; } // Only relevant to the TransformCoordinatesComponent public decimal RawCoordinateLongitude { get; set; } // Required by all components after TransformCoordinatesComponent public Coordinate CoordinateLatitude { get; set; } // Required by all components after TransformCoordinatesComponent public Coordinate CoordinateLongitude { get; set; } // Set in DetermineCountryComponent, not required anywhere. // Requires CoordinateLatitude and CoordinateLongitude (TransformCoordinatesComponent) public string CountryCode { get; set; } // Set in DetermineOnRoadComponent, not required anywhere. // Requires CoordinateLatitude and CoordinateLongitude (TransformCoordinatesComponent) public bool OnRoad { get; set; } } Problems: I'm very concerned about the dependency that a component has on specific properties. One way to solve this would be to create specific types for each component, but then I could no longer chain them together like this. The other problem is that the order of components in the pipeline matters - there is some dependency - and the current structure does not provide any static or runtime checking for that. Any feedback would be appreciated.

    Read the article

  • What are Collaboration Data Objects (CDO)?

    - by Pranav
    Collaboration Data Objects, or CDO, is a component that enables messaging between applications. It's something like the MFC we have in VC++: it lets us use a simpler interface than the Win32 API, which, as an interface, still requires a lot of extra work from developers (yet is very robust!). CDO is primarily built to simplify the creation of messaging applications, and we should keep in mind that CDO is NOT a new messaging model but is BUILT ON the MAPI architecture. It is just an extended interface that collaborates with MAPI and simplifies the programming task at hand for the creation of messaging applications. CDO replaced Microsoft's earlier Active Messaging. CDO 1.2 enables us to work with data, send and receive emails, and use a host of other functions, such as rendering Exchange functionality into HTML. If you've got some firsthand experience, a couple of tips would be great and would definitely further my knowledge in this area and hopefully give me a more refined understanding. Some pointers on MAPI would be pretty cool as well.

    Read the article

  • Are Remote commit hooks in subversion possible?

    - by John Hamelink
    Hi there, my current setup is as follows: We have a Linux samba share that contains all the repository folders (with the hooks folder inside, amongst the others). All the developers have the share mapped as a network drive, and check out to a local directory (normally C:\Server\RepositoryName) where they work on their files. All the machines accessing the drive (unfortunately) run Windows. What I'm aiming to do is to have a hook on the Linux server that detects when a commit has been made, by which project, the revision number, the name of the developer who committed, etc. I looked into the hook files, but they seem to be run by the client. Is there a way to monitor svn changes and collect the relevant information from the Linux server?
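    For reference, a post-commit hook lives in the repository's hooks/ directory and is executed by whichever process performs the commit; with file:// access over a share that is indeed the client machine, whereas serving the repositories with svnserve or Apache on the Linux box would make the hook run server-side. A rough Python sketch of such a hook, using svnlook (the log destination is an assumption):

        #!/usr/bin/env python3
        # post-commit hook sketch: Subversion passes the repository path and revision as arguments
        import subprocess, sys

        repo, rev = sys.argv[1], sys.argv[2]

        def svnlook(subcommand):
            return subprocess.check_output(["svnlook", subcommand, "-r", rev, repo]).decode().strip()

        author = svnlook("author")
        changed = svnlook("changed")   # paths added/modified/deleted in this revision

        with open("/var/log/svn-commits.log", "a") as log:   # assumed log destination
            log.write("%s r%s by %s\n%s\n" % (repo, rev, author, changed))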

    Read the article

  • Best practices for including open source code from other public projects?

    - by Bryan Kemp
    If I use an existing open source project that is hosted, for example, on GitHub within one of my projects, should I check the code from the other project into my public repo or not? I have mixed feelings about this: #1, I want to give proper credit and attribution to the original developer, and if appropriate I will contribute back any changes I need to make. However, given that I have developed and tested against a specific revision of the other project's code, that is the version that I want to distribute to users of my project. Here is the specific use case to illustrate my point; I am looking for a more generalized answer than this specific case. I am developing a simple framework using RabbitMQ and Python for outbound messages that will allow for sending SMS, Twitter, and email, and is extensible to support additional messaging buses as well. There is a project on GitHub, developed by another person, that handles the creation and sending of SMS messages. When I create my own repo, how do I account for the code that I am including from the other project?

    Read the article

  • Which prediction model for web page recommendation?

    - by Nilesh
    I am trying to implement a web page recommendation system wherein registered users will be given a recommendation of which page to visit, depending upon the previous data. So as an initial step I decided to cluster the data with rough sets and then move on to finding the sequential patterns using the PrefixSpan algorithm. Now I want a better prediction model in place which can predict the access frequency of pages. I have looked at the Markov model, but some more suggestions would still be valuable. Also, please help me with some references for the models. Is it possible to directly predict the next page access from the result of PrefixSpan? If so, how?
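    As a starting point, a first-order Markov model is just a table of page-to-page transition counts built from past sessions; a toy Python sketch (the session data is made up):

        from collections import defaultdict, Counter

        sessions = [["home", "products", "cart"], ["home", "blog", "products"]]   # made-up click paths

        transitions = defaultdict(Counter)
        for session in sessions:
            for current, nxt in zip(session, session[1:]):
                transitions[current][nxt] += 1

        def predict_next(page):
            counts = transitions[page]
            total = sum(counts.values())
            if not total:
                return []
            # candidate next pages with their estimated probabilities, most likely first
            return sorted(((p, c / total) for p, c in counts.items()), key=lambda x: -x[1])

        print(predict_next("home"))   # [('products', 0.5), ('blog', 0.5)]

    Higher-order variants (conditioning on the last two or three pages) or the clusters from the rough-set step can be layered on top of the same idea.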

    Read the article

  • statistics for checking imported data?

    - by user1936
    I'm working on a data migration of several hundred nodes from a Drupal 6 to a Drupal 7 site. I've got the data exported to the new site and I want to check it. Harkening back to my statistics classes, I recall that there is some way to figure out how many randomly chosen nodes to check to give me some level of confidence that the whole process was correct. Can anyone enlighten me as to this practical application of statistics? For any given number of units, how big must the sample be to have a given confidence interval?
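    The usual back-of-the-envelope answer is Cochran's sample-size formula with a finite population correction; a small Python sketch (95% confidence, a 5% margin of error, and the worst-case proportion p = 0.5):

        import math

        def sample_size(population, z=1.96, margin=0.05, p=0.5):
            n0 = (z ** 2) * p * (1 - p) / margin ** 2           # infinite-population sample size
            return math.ceil(n0 / (1 + (n0 - 1) / population))  # finite population correction

        print(sample_size(500))   # roughly 218 of 500 nodes to check

    Strictly speaking, this tells you how many nodes to sample to estimate the error rate within the chosen margin; it does not prove that every single node migrated correctly.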

    Read the article

  • Removing mdadm array and converting to regular disks while preserving data

    - by Jeffrey Kevin Pry
    I have a 6-disk (2 TB each) mdadm RAID 5 volume created in Ubuntu 12.04 Server. However, I'm moving to a different solution and want to "unraid" my disks but keep the data. Only 50% is in use. From what I can surmise, I basically have to do this repeatedly for each physical disk: Fail the disk Format the failed disk Move a portion of files to the new disk Reshape the array Shrink the logical volume md0 This seems like a very time-consuming process. Is there an easier way to do this (automatically, perhaps) without buying new disks to temporarily hold the data? I am also aware that during this process my RAID volume will be degraded and vulnerable the entire time. I am not too concerned about this and will be using battery backup and moving the most important files off first. Thank you for your help!

    Read the article

  • Strange sound issues with Ubuntu 11.10

    - by DNA
    I am having strange issues with sound and volume on Ubuntu 11.10. I've noticed that quite often, especially when I wake my computer from sleep, there is no sound even though the volume slider is cranked all the way up. But when I plug my headphones in and then take them out, it seems to kick my sound back into action! What gives? Which daemon in /etc/init.d needs to be restarted with the restart command to fix this? I have the following in my init.d folder: /etc/init.d/alsa-restore and /etc/init.d/alsa-store. Also, my Ubuntu startup sound doesn't play either....

    Read the article

  • How to refactor when all your development is on branches?

    - by Mark
    At my company, all of our development (bug fixes and new features) is done on separate branches. When it's complete, we send it off to QA, who test it on that branch, and when they give us the green light, we merge it into our main branch. This could take anywhere between a day and a year. If we try to squeeze any refactoring in on a branch, we don't know how long it will be "out" for, so it can cause many conflicts when it's merged back in. For example, let's say I want to rename a function because the feature I'm working on is making heavy use of this function, and I found that its name doesn't really fit its purpose (again, this is just an example). So I go around and find every usage of this function, and rename them all to its new name, and everything works perfectly, so I send it off to QA. Meanwhile, new development is happening, and my renamed function doesn't exist on any of the branches that are being forked off main. When my issue gets merged back in, they're all going to break. Is there any way of dealing with this? It's not like management will ever approve a refactor-only issue, so it has to be squeezed in with other work. It can't be developed directly on main because all changes have to go through QA, and no one wants to be the jerk that broke main so that he could do a little bit of non-essential refactoring.

    Read the article

  • What's the best way to version CSS and JS URLs?

    - by David Eyk
    As per Yahoo's much-ballyhooed Best Practices for Speeding Up Your Site, we serve up static content from a CDN using far-future cache expiration headers. Of course, we need to occasionally update these "static" files, so we currently add an infix version as part of the filename (based on the SHA1 sum of the file contents). Thus: styles.min.css Becomes: styles.min.abcd1234.css However, managing the versioned files can become tedious, and I was wondering if a GET argument notation might be cleaner and better: styles.min.css?v=abcd1234 Which do you use, and why? Are there browser- or proxy/cache-related considerations that I should consider?
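    For what it's worth, the filename-infix scheme is easy to generate at build time from the file contents; a small Python sketch (the file name is just an example):

        import hashlib, shutil
        from pathlib import Path

        def versioned_copy(path):
            src = Path(path)
            digest = hashlib.sha1(src.read_bytes()).hexdigest()[:8]    # short content hash
            dst = src.with_name(src.stem + "." + digest + src.suffix)  # styles.min.css -> styles.min.abcd1234.css
            shutil.copyfile(src, dst)
            return dst.name

        print(versioned_copy("styles.min.css"))

    The query-string variant only needs the digest appended to the URL at render time, but some proxies and older caches are reported not to cache URLs that contain a "?", which is the main argument for keeping the version in the filename.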

    Read the article

  • Unused Indexes on MDP_MATRIX Consuming Resources

    - by user702295
    Disable unused indexes: As much as it is recommended to create relevant indexes, it is advisable not to have too many indexes on the mdp_matrix table. Too many indexes will cause long waits on the table, as indexes need to be updated every time the table is updated. There are many seeded indexes on mdp_matrix; every out-of-the-box data model level has an index on the matrix table. If a level is unused in the specific data model of the implementation, it is advisable to disable that index. If the customer is not sure if and how indexes are utilized, the DBA can monitor all indexes. After a few cycles of operation, the DBA should go over that list, see which indexes have not been used, and consider disabling them. There are scripts on the net to monitor indexes, or you can use the MONITORING USAGE clause of the ALTER INDEX statement.

    Read the article

  • How to make the volume change more gradually?

    - by xio
    I'm currently using Ubuntu 13.04 on a Lenovo ThinkPad R61i laptop, and the problem I have is that the actual sound volume doesn't grow linearly with the position of the volume slider: in the range from 0% to 75% it grows very slowly, but in the range from 75% to 100% it grows very rapidly, so that a small change in the slider's position corresponds to a disproportionately large change in volume. What might be the cause, and how can I fix it? It used to work well on Ubuntu 11.*.

    Read the article

  • Persisting natural language processing parsed data

    - by tjb1982
    I've recently started experimenting with natural language processing (NLP) using Stanford's CoreNLP, and I'm wondering what some of the standard ways are to store NLP-parsed data for something like a text mining application. One way I thought might be interesting is to store the children as an adjacency list and make good use of recursive queries (Postgres supports this and I've found it works really well). But I assume there are standard approaches, depending on what kind of analysis is being done, that have been adopted by people working in the field over the years. So what are the standard persistence strategies for NLP-parsed data and how are they used?
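    A minimal sketch of the adjacency-list idea, here with SQLite (which, like Postgres, supports recursive CTEs); the table and column names are just for illustration:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE node (id INTEGER PRIMARY KEY, parent_id INTEGER REFERENCES node(id), label TEXT, token TEXT);
        INSERT INTO node VALUES (1, NULL, 'S', NULL), (2, 1, 'NP', NULL), (3, 2, 'NNS', 'dogs'),
                                (4, 1, 'VP', NULL), (5, 4, 'VBP', 'bark');
        """)

        # Recursive query: walk the subtree rooted at the VP node (id = 4)
        rows = conn.execute("""
        WITH RECURSIVE subtree(id, label, token) AS (
            SELECT id, label, token FROM node WHERE id = 4
            UNION ALL
            SELECT n.id, n.label, n.token FROM node n JOIN subtree s ON n.parent_id = s.id
        )
        SELECT id, label, token FROM subtree
        """).fetchall()
        print(rows)   # [(4, 'VP', None), (5, 'VBP', 'bark')]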

    Read the article
