Search Results

Search found 23062 results on 923 pages for 'multiple models'.

  • Best Way to Archive Digital Photos and Avoid Duplicate File Names

    - by user31575
    This problem pertains to archiving digital pictures taken with multiple cameras. Answers here covered the general topic of the mechanics of backups: How do you archive digital photos and videos? I however face another problem. Having multiple cameras (Canon) and multiple SD cards (mixed and matched at random), I have found that different SD cards have different photos with the same file name, i.e. two different photos each named IMG_3141.JPG. Additionally, for better or worse, I've backed up the files to multiple places and need to consolidate my backups. I want to eliminate duplicates, but not clobber files. The only way I can think of is to append a code (md5 or sha1) to the file name, i.e. IMG_3141.JPG becomes IMG_3141_KT229QZ31415926ASDF.JPG, then sort them out. Any better ways? (Note: this "open letter" addresses the 'duplicate file name' concern: http://photofocus.com/2010/09/13/an-open-letter-to-digital-camera-manufacturers-regarding-camera-file-naming/ )
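
    A minimal Python sketch of the hash-suffix approach described above; the source and destination folders are placeholders, and a short SHA-1 prefix stands in for the full digest:

      import hashlib
      import shutil
      from pathlib import Path

      def consolidate(source_dirs, dest_dir):
          """Copy unique photos into dest_dir, appending a content hash to each name."""
          dest = Path(dest_dir)
          dest.mkdir(parents=True, exist_ok=True)
          seen = set()  # content hashes already copied
          for src in source_dirs:
              for photo in Path(src).rglob("*.JPG"):
                  digest = hashlib.sha1(photo.read_bytes()).hexdigest()
                  if digest in seen:
                      continue  # exact duplicate content: skip instead of clobbering
                  seen.add(digest)
                  # IMG_3141.JPG -> IMG_3141_1a2b3c4d.JPG
                  new_name = f"{photo.stem}_{digest[:8]}{photo.suffix}"
                  shutil.copy2(photo, dest / new_name)

      # consolidate(["/backups/sd1", "/backups/sd2"], "/archive/photos")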

    Read the article

  • Organizations &amp; Architecture UNISA Studies &ndash; Chap 7

    - by MarkPearl
    Learning Outcomes Name different device categories Discuss the functions and structure of I/O modules Describe the principles of Programmed I/O Describe the principles of Interrupt-driven I/O Describe the principles of DMA Discuss the evolution characteristics of I/O channels Describe different types of I/O interface Explain the principles of point-to-point and multipoint configurations Discuss the way in which a FireWire serial bus functions Discuss the principles of InfiniBand architecture External Devices An external device attaches to the computer by a link to an I/O module. The link is used to exchange control, status, and data between the I/O module and the external device. External devices can be classified into 3 categories… Human readable – e.g. video display Machine readable – e.g. magnetic disk Communications – e.g. wifi card I/O Modules An I/O module has two major functions… Interface to the processor and memory via the system bus or central switch Interface to one or more peripheral devices by tailored data links Module Functions The major functions or requirements for an I/O module fall into the following categories… Control and timing Processor communication Device communication Data buffering Error detection The I/O function includes a control and timing requirement, to coordinate the flow of traffic between internal resources and external devices. Processor communication involves the following… Command decoding Data Status reporting Address recognition The I/O module must be able to perform device communication. This communication involves commands, status information, and data. An essential task of an I/O module is data buffering, due to the relatively slow speeds of most external devices. An I/O module is often responsible for error detection and for subsequently reporting errors to the processor. I/O Module Structure An I/O module functions to allow the processor to view a wide range of devices in a simple-minded way. The I/O module may hide the details of timing, formats, and the electromechanics of an external device so that the processor can function in terms of simple read and write commands. An I/O channel/processor is an I/O module that takes on most of the detailed processing burden, presenting a high-level interface to the processor. There are 3 techniques possible for I/O operations… Programmed I/O Interrupt-driven I/O DMA Programmed I/O When a processor is executing a program and encounters an instruction relating to I/O, it executes that instruction by issuing a command to the appropriate I/O module. With programmed I/O, the I/O module will perform the requested action and then set the appropriate bits in the I/O status register. The I/O module takes no further action to alert the processor. I/O Commands To execute an I/O-related instruction, the processor issues an address, specifying the particular I/O module and external device, and an I/O command.
There are four types of I/O commands that an I/O module may receive when it is addressed by a processor… Control – used to activate a peripheral and tell it what to do Test – used to test various status conditions associated with an I/O module and its peripherals Read – causes the I/O module to obtain an item of data from the peripheral and place it in an internal buffer Write – causes the I/O module to take an item of data from the data bus and subsequently transmit that data item to the peripheral The main disadvantage of this technique is that it is a time-consuming process that keeps the processor busy needlessly. I/O Instructions With programmed I/O there is a close correspondence between the I/O-related instructions that the processor fetches from memory and the I/O commands that the processor issues to an I/O module to execute the instructions. Typically there will be many I/O devices connected through I/O modules to the system – each device is given a unique identifier or address – when the processor issues an I/O command, the command contains the address of the desired device, thus each I/O module must interpret the address lines to determine if the command is for itself. When the processor, main memory and I/O share a common bus, two modes of addressing are possible… Memory-mapped I/O Isolated I/O (for a detailed explanation read page 245 of the book) The advantage of memory-mapped I/O over isolated I/O is that it has a large repertoire of instructions that can be used, allowing more efficient programming. The disadvantage of memory-mapped I/O over isolated I/O is that valuable memory address space is used up. Interrupt-driven I/O Interrupt-driven I/O works as follows… The processor issues an I/O command to a module and then goes on to do some other useful work The I/O module will then interrupt the processor to request service when it is ready to exchange data with the processor The processor then executes the data transfer and then resumes its former processing Interrupt Processing The occurrence of an interrupt triggers a number of events, both in the processor hardware and in software. When an I/O device completes an I/O operation, the following sequence of hardware events occurs… The device issues an interrupt signal to the processor The processor finishes execution of the current instruction before responding to the interrupt The processor tests for an interrupt – determines that there is one – and sends an acknowledgement signal to the device that issued the interrupt. The acknowledgement allows the device to remove its interrupt signal The processor now needs to prepare to transfer control to the interrupt routine. To begin, it needs to save information needed to resume the current program at the point of interrupt. The minimum information required is the status of the processor and the location of the next instruction to be executed. The processor now loads the program counter with the entry location of the interrupt-handling program that will respond to this interrupt. It also saves the values of the processor registers because the interrupt operation may modify these The interrupt handler processes the interrupt – this includes examination of status information relating to the I/O operation or other event that caused an interrupt When interrupt processing is complete, the saved register values are retrieved from the stack and restored to the registers Finally, the PSW and program counter values from the stack are restored.
Design Issues Two design issues arise in implementing interrupt I/O… Because there will be multiple I/O modules, how does the processor determine which device issued the interrupt? If multiple interrupts have occurred, how does the processor decide which one to process? Addressing device recognition, 4 general categories of techniques are in common use… Multiple interrupt lines Software poll Daisy chain Bus arbitration For a detailed explanation of these approaches read page 250 of the textbook. Interrupt-driven I/O, while more efficient than simple programmed I/O, still requires the active intervention of the processor to transfer data between memory and an I/O module, and any data transfer must traverse a path through the processor. Thus it suffers from two inherent drawbacks… The I/O transfer rate is limited by the speed with which the processor can test and service a device The processor is tied up in managing an I/O transfer; a number of instructions must be executed for each I/O transfer Direct Memory Access When large volumes of data are to be moved, an efficient technique is direct memory access (DMA). DMA Function DMA involves an additional module on the system bus. The DMA module is capable of mimicking the processor and taking over control of the system from the processor. It needs to do this to transfer data to and from memory over the system bus. DMA must use the bus only when the processor does not need it, or it must force the processor to suspend operation temporarily (most common – referred to as cycle stealing). When the processor wishes to read or write a block of data, it issues a command to the DMA module by sending to the DMA module the following information… Whether a read or write is requested, using the read or write control line between the processor and the DMA module The address of the I/O device involved, communicated on the data lines The starting location in memory to read from or write to, communicated on the data lines and stored by the DMA module in its address register The number of words to be read or written, communicated via the data lines and stored in the data count register The processor then continues with other work; it delegates the I/O operation to the DMA module, which transfers the entire block of data, one word at a time, directly to or from memory without going through the processor. When the transfer is complete, the DMA module sends an interrupt signal to the processor, thus the processor is involved only at the beginning and end of the transfer. I/O Channels and Processors Characteristics of I/O Channels As one proceeds along the evolutionary path, more and more of the I/O function is performed without CPU involvement. The I/O channel represents an extension of the DMA concept. An I/O channel has the ability to execute I/O instructions, which gives it complete control over I/O operations. In a computer system with such devices, the CPU does not execute I/O instructions – such instructions are stored in main memory to be executed by a special-purpose processor in the I/O channel itself. Two types of I/O channels are common… A selector channel controls multiple high-speed devices. A multiplexor channel can handle I/O with multiple devices at the same time, transferring characters to and from each as fast as possible.
The external interface: FireWire and InfiniBand Types of Interfaces One major characteristic of the interface is whether it is serial or parallel. In a parallel interface there are multiple lines connecting the I/O module and the peripheral, and multiple bits are transferred simultaneously; in a serial interface there is only one line used to transmit data, and bits must be transmitted one at a time. With the new generation of serial interfaces, parallel interfaces are becoming less common. In either case, the I/O module must engage in a dialogue with the peripheral. In general terms the dialogue may look as follows… The I/O module sends a control signal requesting permission to send data The peripheral acknowledges the request The I/O module transfers data The peripheral acknowledges receipt of data For a detailed explanation of FireWire and InfiniBand technology read pages 264–270 of the textbook.
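
    The DMA hand-off described in these notes can be pictured with a short, purely conceptual Python sketch (this is not hardware code; the command fields mirror the four items the processor sends to the DMA module, and the interrupt is modelled as a plain callback):

      from dataclasses import dataclass

      @dataclass
      class DMACommand:
          write: bool          # read or write, signalled on the control line
          device_address: int  # which I/O device, sent on the data lines
          memory_start: int    # starting memory location (address register)
          word_count: int      # number of words to move (data count register)

      def dma_transfer(cmd, memory, device_buffer, signal_interrupt):
          """Move the whole block one word at a time without the processor,
          then raise an interrupt -- the processor is involved only at the
          start (issuing cmd) and at the end (handling the interrupt)."""
          for i in range(cmd.word_count):
              if cmd.write:
                  device_buffer.append(memory[cmd.memory_start + i])
              else:
                  memory[cmd.memory_start + i] = device_buffer[i]
          signal_interrupt()

      # Example: write 3 words starting at address 4 to a device buffer.
      # memory = list(range(16)); buf = []
      # dma_transfer(DMACommand(True, 1, 4, 3), memory, buf, lambda: print("done"))
      # buf is now [4, 5, 6]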

    Read the article

  • More Tables or More Databases?

    - by BuckWoody
    I got an e-mail from someone with an interesting situation. He has 15,000 customers, and he asks whether he should have a separate database for each customer’s data. Without a LOT more data it’s impossible to say, of course, but there are some general concepts to keep in mind. Whenever you’re segmenting data, it’s all about boundary choices. You have not only boundaries around how big the data will get, but things like how many objects (tables, stored procedures and so on) will be involved, whether there are any cross-sections of data (do they share location or product information) and – very important – what are the security requirements? From the answers to these types of questions, you now have the choice of making multiple tables in a single database, or using multiple databases. A database carries some overhead – it needs a certain amount of memory for locking and so on. But it has a very clean boundary – everything from objects to security can be kept apart. Having multiple users in the same database is possible as well, using things like a Schema. But keeping 15,000 schemas can be challenging as well. My recommendation in complex situations like this is similar to a post on decisions that I did earlier – I lay out the choices on a spreadsheet in rows, and then my requirements at the top in the columns. I give each choice a number based on how well it meets each requirement. At the end, the highest number wins (a sketch of that scoring follows below). And many times it’s a mix – perhaps this person could segment customers into larger regions or districts or products, in a database. Within that database might be multiple schemas for the customers. Of course, if he needs to query across all customers, that becomes another requirement.
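
    A small Python sketch of that scoring matrix; the requirement names and scores here are made up purely to show the mechanics:

      # Choices in rows, requirements in columns, one score per cell;
      # the totals pick the winner.  All names and numbers are invented.
      requirements = ["security isolation", "admin overhead", "cross-customer queries"]

      scores = {
          "one database per customer":      [5, 1, 2],
          "one schema per customer":        [4, 2, 3],
          "shared tables with customer id": [2, 5, 5],
      }

      totals = {choice: sum(vals) for choice, vals in scores.items()}
      winner = max(totals, key=totals.get)
      print(totals)   # {'one database per customer': 8, 'one schema per customer': 9, ...}
      print(winner)   # 'shared tables with customer id' with these sample numbers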

    Read the article

  • Creating Tables in DokuWiki

    - by Bryan
    I'm trying to create a table in DokuWiki with a cell that vertically spans. However, unlike the examples in the syntax guide, the cell I want to create has more than one row of text. The following is an ASCII version of what I'm trying to achieve:

      +-----------+-----------+
      | Heading 1 | Heading 2 |
      +-----------+-----------+
      |           | Multiple  |
      | Some text | rows of   |
      |           | text      |
      +-----------+-----------+

    I've tried the following syntax:

      ^ Heading 1 ^ Heading 2 ^
      | Some text | Multiple  |
      | :::       | rows of   |
      | :::       | text      |

    but this generates the output:

      +-----------+-----------+
      | Heading 1 | Heading 2 |
      +-----------+-----------+
      |           | Multiple  |
      |           +-----------+
      | Some text | rows of   |
      |           +-----------+
      |           | text      |
      +-----------+-----------+

    I can't find anything in the DokuWiki documentation, so I'm hoping I'm missing something fundamentally simple?

    Read the article

  • Farseer: Cutting body from texture

    - by Robin Betka
    Is it possible to cut a body from a texture in Farseer 3.0? I have a texture converted to a body with multiple fixtures (using BayazitDecomposer, the CreatePolygon method, ...) and can even do it as a BreakableBody. But when I try to cut it with the cutting tool, the fixture itself gets cut but its connections get discarded! So when I have 14 fixtures and cut fixture 3, for example, fixture 3 gets cut but 1, 2 and 3-14 just go away. Is there a way to do it? It would work already if I could convert the texture into a body with 1 fixture only, but I haven't figured out if that's possible. BayazitDecomposer creates the multiple vertices, but leaving it out creates something weird and I get assert messages all the time. I know I couldn't break it that way, but I don't need that anyway when I could cut it. The breaking is just the workaround I'm using now. Extending the cutting tool to support multiple fixtures is very hard, especially when you consider that in one cut multiple fixtures could be cut and then connected again.

    Read the article

  • Program that groups windows into tabs

    - by Arithmomaniac
    I recall once stumbling on a program that could take multiple application windows and wrap them inside a large window with a tabbed interface. One use of this, for example, would be to wrap multiple instances of Excel into one window, and thus one icon on the taskbar. I couldn't find mention of this program via Google, because of the multiple meanings of the word "window". Does anyone remember, or know of, such a program?

    Read the article

  • Scene or Activity Animation

    - by Siddharth
    My game requires an animation when one activity finishes and the next starts, because I have developed the game with multiple activities rather than multiple scenes per game. I have to show the animation at activity creation and activity destruction. I have tried to create the basic animations supported by Android, with all the XML files placed in the anim folder, but the resource loading is so heavy that any animation I provide using the Android method doesn't work for me; it looks weird. If the Scene class has some functionality for animation, please let me know and I will try to load different types of animation using scenes. I have not created multiple scenes because I don't know how to manage multiple scenes in AndEngine, though I have 8 months of working experience with AndEngine, so help with that would also be appreciated. Basically I want to create an animation where one activity slides out while at the same time the other activity slides in, so the user can see the transition between activities. Thanks in advance.

    Read the article

  • Django authentication in django nonrel on GAE

    - by tooba
    I'm using the Django-nonrel project on a Google App Engine project running locally in development. I've created my own models and these are fine when they are saved and retrieved in the datastore. I'm hoping to use django.contrib.auth to provide the user functionality. I can use the shell to create users and these get assigned an ID. When I create one of my own models which references User, I have to pass in a user ID, as it quite rightly fails otherwise. However, checking via the GAE admin interface, I can't see the User model in the datastore for the users I've created via the shell. Nor can I retrieve the user details from one of my models which references them. Calls to mymodel.user.username return nothing. Nor can I log into admin using the username and password I've set up. I can see saved versions of the models I've made in the GAE admin app. I get the impression that users are being created somewhere other than the datastore. Is there something else I need to do to use the standard contrib.auth users with django-nonrel and GAE?

    Read the article

  • Why don't RSpec's methods, "get", "post", "put", "delete" work in a controller spec in a gem (or out

    - by ramon.tayag
    I'm not new to Rails or RSpec, but I'm new to making gems. When I test my controllers, the REST methods "get", "post", "put", "delete" give me an undefined method error. Below you'll find code, but if you prefer to see it in a pastie, click here. Thanks! Here's my spec_helper: $LOAD_PATH.unshift(File.dirname(__FILE__)) $LOAD_PATH.unshift(File.join(File.dirname(__FILE__), '..', 'lib')) require 'rubygems' require 'active_support' unless defined? ActiveSupport # Need this so that mattr_accessor will work in Subscriber module require 'active_record/acts/subscribable' require 'active_record/acts/subscriber' require 'action_view' require 'action_controller' # Since we'll be testing subscriptions controller #require 'action_controller/test_process' require 'spec' require 'spec/autorun' # Need active_support to use mattr_accessor in Subscriber module, and to set the following inflection ActiveSupport::Inflector.inflections do |inflect| inflect.irregular 'dorkus', 'dorkuses' end require 'active_record' # Since we'll be testing a User model which will be available in the app # Tell active record to load the subscribable files ActiveRecord::Base.send(:include, ActiveRecord::Acts::Subscribable) ActiveRecord::Base.send(:include, ActiveRecord::Acts::Subscriber) require 'app/models/user' # The user model we expect in the application require 'app/models/person' require 'app/models/subscription' require 'app/models/dorkus' require 'app/controllers/subscriptions_controller' # The controller we're testing #... more but I think irrelevant My subscriptions_spec: require File.expand_path(File.dirname(__FILE__) + '/../spec_helper') describe SubscriptionsController, "on GET index" do load_schema describe ", when only subscribable params are passed" do it "should list all the subscriptions of the subscribable object" end describe ", when only subscriber params are passed" do it "should list all the subscriptions of the subscriber" do u = User.create d1 = Dorkus.create d2 = Dorkus.create d1.subscribe! u d2.subscribe! u get :index, {:subscriber_type => "User", :subscriber_id => u.id} assigns[:subscriptions].should == u.subscriptions end end end My subscriptions controller: class SubscriptionsController The error: NoMethodError in 'SubscriptionsController on GET index , when only subscriber params are passed should list all the subscriptions of the subscriber' undefined method `get' for # /home/ramon/rails/acts_as_subscribable/spec/controllers/subscriptions_controller_spec.rb:21:

    Read the article

  • Good DB Migrations for CakePHP?

    - by Martin Westin
    Hi, I have been trying a few migration scripts for CakePHP but I ran into problems with all of them in some form or another. Please advise me on a migration option for Cake that you use live and know works. I'd like the following "features": -Support CakePHP 1.2 (e.g. CakeDC's migrations will only be an option when 1.3 is stable and my app is migrated to the new codebase) -Support for (or at least not halt on) Models with a different database config -Support Models in sub-folders of app/models -Support Models in plugins -Support tables that do not conform to Cake conventions (I have a few special tables that do not have a single primary key field and need to keep them) -Plays well with automated deployment via Capistrano and Git. I do not need rails-style versioned files; a git-versioned schema file that is compared live to the existing schema will do. That is: I like the SchemaShell in Cake apart from it not being compatible with most of my requirements above. I have looked at and tested: CakePHP Schema Shell http://book.cakephp.org/view/734/Schema-management-and-migrations CakeDC migrations http://cakedc.com/downloads/view/cakephp_migrations_plugin YAML migrations http://github.com/georgious/cakephp-yaml-migrations-and-fixtures joelmoss migrations http://code.google.com/p/cakephp-migrations

    Read the article

  • Django: Complex filter parameters or...?

    - by minder
    This question is connected to my other question but I changed the logic a bit. I have models like this: from django.contrib.auth.models import Group class Category(models.Model): (...) editors = ForeignKey(Group) class Entry(models.Model): (...) category = ForeignKey(Category) Now let's say a User logs into the admin panel and wants to change an Entry. How do I limit the list of Entries only to those he has the right to edit? I mean: how can I list only those Entries which are assigned to a Category that, in its "editors" field, has one of the groups the User belongs to? What if the User belongs to several groups? I still need to show all relevant Entries. I tried experimenting with the changelist_view() and queryset() methods but this problem is a bit too complex for me. I'm also wondering if granular permissions could help me with the task, but for now I have no clue. I came up only with this: first I get the list of all Groups the User belongs to. Then for each Group I get all connected Categories, and then for each Category I get all Entries that belong to these Categories. Unfortunately I have no idea how to stitch everything together as filter() parameters to produce a nice single QuerySet.
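
    A sketch of how that chain can collapse into a single filter() call, written as an admin queryset() override; the EntryAdmin name and the import path are assumptions, not code from the question (on later Django versions the method is get_queryset() instead of queryset()):

      from django.contrib import admin
      from myapp.models import Entry  # hypothetical app/module path

      class EntryAdmin(admin.ModelAdmin):
          def queryset(self, request):
              qs = super(EntryAdmin, self).queryset(request)
              if request.user.is_superuser:
                  return qs
              # Span Entry -> Category -> Group in one lookup: keep Entries
              # whose Category's editors group is among the user's groups.
              return qs.filter(category__editors__in=request.user.groups.all())

      admin.site.register(Entry, EntryAdmin)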

    Read the article

  • Doctrine stupid or me stupid?

    - by ropstah
    I want to use a single Doctrine install on our server and serve multiple websites. Naturally the models should be maintained in the websites' models folders. I have everything set up (and not running) like so: Doctrine @ /CustomFramework/Doctrine Websites @ /var/www/vhosts/custom1.com/ /var/www/vhosts/custom2.com/ Generating works fine, all models are delivered in /application_folder/models and /application_folder/models/generated for the correct website. I've added Doctrine::loadModels('path_to_models') in the bootstrap file for each website, and also registered the autoloader. But.... This is the autoloader code: public static function autoload($className) { if (strpos($className, 'sfYaml') === 0) { require dirname(__FILE__) . '/Parser/sfYaml/' . $className . '.php'; return true; } if (0 !== stripos($className, 'Doctrine_') || class_exists($className, false) || interface_exists($className, false)) { return false; } $class = self::getPath() . DIRECTORY_SEPARATOR . str_replace('_', DIRECTORY_SEPARATOR, $className) . '.php'; if (file_exists($class)) { require $class; return true; } return false; } Am I stupid, or is the autoloader really doing this: $class = self::getPath() . DIRECTORY_SEPARATOR . str_replace('_', DIRECTORY_SEPARATOR, $className) . '.php'; or, in other words: does it require me to have ALL my generated Doctrine classes inside the Doctrine app directory? Or, in other words, do I need a separate Doctrine installation for each website? I'm getting an error that the BaseXXX class cannot be found. So the autoloading doesn't function correctly. I really hope I'm doing something wrong... anyone?

    Read the article

  • ObjC: Alloc instance of Class and performing selectors on it leads to __CFRequireConcreteImplementat

    - by Arakyd
    Hi, I'm new to Objective-C and I'd like to abstract my database access using a model class like this: @interface LectureModel : NSMutableDictionary { } -(NSString*)title; -(NSDate*)begin; ... @end I use the dictionary methods setValue:forKey: to store attributes and return these in the getters. Now I want to read these models from a sqlite database by using the Class dynamically. + (NSArray*)findBySQL:(NSString*)sql intoModelClass:(Class)modelClass { NSMutableArray* models = [[[NSMutableArray alloc] init] autorelease]; sqlite3* db = sqlite3_open(...); sqlite3_stmt* result = NULL; sqlite3_prepare_v2(db, [sql UTF8String], -1, &result, NULL); while(sqlite3_step(result) == SQLITE_ROW) { id modelInstance = [[modelClass alloc] init]; for (int i = 0; i < sqlite3_column_count(result); ++i) { NSString* key = [NSString stringWithUTF8String:sqlite3_column_name(result, i)]; NSString* value = [NSString stringWithUTF8String:(const char*)sqlite3_column_text(result, i)]; if([modelInstance respondsToSelector:@selector(setValue:forKey:)]) [modelInstance setValue:value forKey:key]; } [models addObject:modelInstance]; } sqlite3_finalize(result); sqlite3_close(db); return models; } Funny thing is, the respondsToSelector: works, but if I try (in the debugger) to step over [modelInstance setValue:value forKey:key], it will throw an exception, and the stacktrace looks like: #0 0x302ac924 in ___TERMINATING_DUE_TO_UNCAUGHT_EXCEPTION___ #1 0x991d9509 in objc_exception_throw #2 0x302d6e4d in __CFRequireConcreteImplementation #3 0x00024d92 in +[DBManager findBySQL:intoModelClass:] at DBManager.m:114 #4 0x0001ea86 in -[FavoritesViewController initializeTableData:] at FavoritesViewController.m:423 #5 0x0001ee41 in -[FavoritesViewController initializeTableData] at FavoritesViewController.m:540 #6 0x305359da in __NSFireDelayedPerform #7 0x302454a0 in CFRunLoopRunSpecific #8 0x30244628 in CFRunLoopRunInMode #9 0x32044c31 in GSEventRunModal #10 0x32044cf6 in GSEventRun #11 0x309021ee in UIApplicationMain #12 0x00002988 in main at main.m:14 So, what's wrong with this? Presumably I'm doing something really stupid and just don't see it... Many thanks in advance for your answers, Arakyd :..

    Read the article

  • Active Record two belongs_to calls or single table inheritance

    - by ethyreal
    In linking a sports event to two teams, at first this seemed to make sense: events - id:integer - integer:home_team_id - integer:away_team_id teams - integer:id - string:name However I am troubled by how I would link that up in the active record model: class Event belongs_to :home_team, :class_name => 'Team', :foreign_key => "home_team_id" belongs_to :away_team, :class_name => 'Team', :foreign_key => "away_team_id" end Is that the best solution? In an answer to a similar question I was pointed to single table inheritance, and then later found polymorphic associations; neither seemed to fit this association. Perhaps I am looking at this wrong, but I see no need to subclass a team into home and away teams, since the distinction is only in where the game is played. If I did go with single table inheritance I wouldn't want each team to belong_to an event, so would this work? # app/models/event.rb class Event < ActiveRecord::Base belongs_to :home_team belongs_to :away_team end # app/models/team.rb class Team < ActiveRecord::Base has_many :teams end # app/models/home_team.rb class HomeTeam < Team end # app/models/away_team.rb class AwayTeam < Team end I thought also about a has_many through association, but that seems too much as I will only ever need two teams, but those two teams don't belong to any one event. event_teams - integer:event_id - integer:team_id - boolean:is_home Is there a cleaner, more semantic way for making these associations in active record? or is one of these solutions the best choice? Thanks

    Read the article

  • Django-South introspection rule doesn't work.

    - by Ory Band
    I'm using Django 1.2.3 and South 0.7.3. I am trying to convert my app (named core) to use Django-South. I have a custom model/field that I'm using, named ImageWithThumbsField. It's basically just the ol' django.db.models.ImageField with some attributes such as height, weight, etc. While trying to ./manage.py convert_to_south core I receive South's freezing errors. I have no idea why, I'm probably missing something... I am using a simple custom model: from django.db.models import ImageField class ImageWithThumbsField(ImageField): def __init__(self, verbose_name=None, name=None, width_field=None, height_field=None, sizes=None, **kwargs): self.verbose_name=verbose_name self.name=name self.width_field=width_field self.height_field=height_field self.sizes = sizes super(ImageField, self).__init__(**kwargs) And this is my introspection rule, which I add to the top of my models.py: from south.modelsinspector import add_introspection_rules from lib.thumbs import ImageWithThumbsField add_introspection_rules( [ ( (ImageWithThumbsField, ), [], { "verbose_name": ["verbose_name", {"default": None}], "name": ["name", {"default": None}], "width_field": ["width_field", {"default": None}], "height_field": ["height_field", {"default": None}], "sizes": ["sizes", {"default": None}], }, ), ], ["^core/.fields/.ImageWithThumbsField",]) These are the errors I receive: ! Cannot freeze field 'core.additionalmaterialphoto.photo' ! (this field has class lib.thumbs.ImageWithThumbsField) ! Cannot freeze field 'core.material.photo' ! (this field has class lib.thumbs.ImageWithThumbsField) ! Cannot freeze field 'core.material.formulaimage' ! (this field has class lib.thumbs.ImageWithThumbsField) ! South cannot introspect some fields; this is probably because they are custom ! fields. If they worked in 0.6 or below, this is because we have removed the ! models parser (it often broke things). ! To fix this, read http://south.aeracode.org/wiki/MyFieldsDontWork Does anybody know why? What am I doing wrong?

    Read the article

  • LaTeX printing only first two pages of a document

    - by Peter Flom
    I am working in LaTeX, and when I create a pdf file (using LaTeX button or pdfLaTeX button or using yap) the pdf has only the first two pages. No errors. It just stops. If I make the first page longer by adding text, it still stops at end of 2nd page. Any ideas? OK, responding to first comment, here is the code \documentclass{article} \title{Outline of Book} \author{Peter L. Flom} \begin{document} \maketitle \section*{Preface} \subsection*{Audience} \subsection*{What makes this book different?} \subsection*{Necessary background} \subsection*{How to read this book} \section{Introduction} \subsection{The purpose of logistic regression} \subsection{The need for logistic regression} \subsection{Types of logistic regression} \section{General issues in logistic regression} \subsection{Transforming independent and dependent variables} \subsection{Interactions} \subsection{Model selection} \subsection{Parameter estimates, confidence intervals, p values} \subsection{Summary and further reading} \section{Dichotomous logistic regression} \subsection{Introduction, theory, examples} \subsection{Exploratory plots and analysis} \subsection{Basic model fitting} \subsection{Advanced and special issues in model fitting} \subsection{Diagnostic and descriptive plots and analysis} \subsection{Traps and gotchas} \subsection{Power analysis} \subsection{Summary and further reading} \subsection{Exercises} \section{Ordinal logistic regression} \subsection{Introduction, theory, examples} \subsubsection{Introduction - what are ordinal variables?} \subsubsection{Theory of the model} \subsubsection{Examples for this chapter} \subsection{Exploratory plots and analysis} \subsection{Basic model fitting} \subsection{Advanced and special issues in model fitting} \subsection{Diagnostic and descriptive plots and analysis} \subsection{Traps and gotchas} \subsection{Power analysis} \subsection{Summary and further reading} \subsection{Exercises} \section{Multinomial logistic regression} \subsection{Introduction, theory, examples} \subsection{Exploratory plots and analysis} \subsection{Basic model fitting} \subsection{Advanced and special issues in model fitting} \subsection{Diagnostic and descriptive plots and analysis} \subsection{Traps and gotchas} \subsection{Power analysis} \subsection{Summary and further reading} \subsection{Exercises} \section{Choosing a model} \subsection{NOIR and its problems} \subsection{Linear vs. ordinal} \subsection{Ordinal vs. multinomial} \subsection{Summary and further reading} \subsection{Exercises} \section{Extensions and related models} \subsection{Other logistic models} \subsection{Multilevel models - PROC NLMIXED and GLIMMIX} \subsection{Loglinear models - PROC CATMOD} \section{Summary} \end{document} thanks Peter

    Read the article

  • zend framework - quickstart application

    - by m gahagan
    I have been attempting to install the 'quickstart' tutorial application on my system. After a considerable amount of frustration - a) because I don't know how it all works and b) mine's a Windows (WAMP) set-up - I have got as far as setting up the guestbook database successfully and reaching the checkpoint "Now browse to http://localhost/guestbook. You should see the following in your browser:". Instead I get this error: Warning: include(C:\wamp\www\quickstart\application/models//GuestbookMapper.php) [function.include]: failed to open stream: No such file or directory in C:\wamp\www\quickstart\library\Zend\Loader\Autoloader\Resource.php on line 176 Warning: include() [function.include]: Failed opening 'C:\wamp\www\quickstart\application/models//GuestbookMapper.php' for inclusion (include_path='C:\wamp\www\quickstart\library;.;C:\wamp\bin\php\php5.3.0\PEAR;C:\wamp\zend\library;C:\wamp\zend\extras\library') in C:\wamp\www\quickstart\library\Zend\Loader\Autoloader\Resource.php on line 176 Fatal error: Class 'Default_Model_GuestbookMapper' not found in C:\wamp\www\quickstart\application\models\Guestbook.php on line 102 Obviously failing to link relevant files is the main issue - first, that 'C:\wamp\www\quickstart\application/models//GuestbookMapper.php' looks wrong to me, but I can't figure out what's creating it. Second, I have a very tenuous grip on the whole file path system and can't tell whether things are wrongly configured. If I could get the guestbook app to function, then I might be able to get a grip on what's going on. As it is, I seem to fix one problem only to find another round the next corner.

    Read the article

  • Schema-less design guidelines for Google App Engine Datastore and other NoSQL DBs

    - by jamesaharvey
    Coming from a relational database background, as I'm sure many others are, I'm looking for some solid guidelines for setting up / designing my datastore on Google App Engine. Are there any good rules of thumb people have for setting up these kinds of schema-less data stores? I understand some of the basics such as denormalizing since you can't do joins, but I was wondering what other recommendations people had. The particular simple example I am working with concerns storing searches and their results. For example I have the following 2 models defined in my Google App Engine app using Python: class Search(db.Model): who = db.StringProperty() what = db.StringProperty() where = db.StringProperty() createDate = db.DateTimeProperty(auto_now_add=True) class SearchResult(db.Model): title = db.StringProperty() content = db.StringProperty() who = db.StringProperty() what = db.StringProperty() where = db.StringProperty() createDate = db.DateTimeProperty(auto_now_add=True) I'm duplicating a bunch of properties between the models for the sake of denormalization since I can't join Search and SearchResult together. Does this make sense? Or should I store a search ID in the SearchResult model and effectively 'join' the 2 models in code when I retrieve them from the datastore? Please keep in mind that this is a simple example. Both models will have a lot more properties and the way I'm approaching this right now, I would put any property I put in the Search model in the SearchResult model as well.
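
    For comparison, the "store a search ID and join in code" alternative mentioned above would look roughly like this with the App Engine db API (a sketch, keeping only the non-duplicated fields on SearchResult):

      from google.appengine.ext import db

      class Search(db.Model):
          who = db.StringProperty()
          what = db.StringProperty()
          where = db.StringProperty()
          createDate = db.DateTimeProperty(auto_now_add=True)

      class SearchResult(db.Model):
          # reference instead of duplicated who/what/where fields
          search = db.ReferenceProperty(Search, collection_name='results')
          title = db.StringProperty()
          content = db.StringProperty()
          createDate = db.DateTimeProperty(auto_now_add=True)

      # some_search.results iterates the linked results; result.search
      # dereferences the parent with an extra datastore get -- the cost
      # of "joining" in code instead of denormalizing.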

    Read the article

  • Django aggregation on a date range

    - by klaut
    Hi all, I have been lurking and learning in here for a while. Now I have a problem where somehow I cannot see an easy solution. In order to learn Django I am building an app that basically keeps track of booked items. What I would like to do is to show how many days per month, for a selected year, one item has been booked. I have the following models: Asset(Model) BookedAsset(Model): asset = models.ForeignKey(Asset) startdate = models.DateField() enddate = models.DateField() So having the following entries:

      asset 1, 2010-02-11, 2010-02-13
      asset 2, 2010-03-12, 2010-03-14
      asset 1, 2010-04-30, 2010-05-01

    I would like to get returned the following:

      asset 1    asset 2
      -------    -------
      Jan = 0    Jan = 0
      Feb = 2    Feb = 0
      Mar = 0    Mar = 2
      Apr = 1    Apr = 0
      May = 1    May = 0
      Jun = 0    Jun = 0
      Jul = 0    Jul = 0
      Aug = 0    Aug = 0
      Sep = 0    Sep = 0
      Oct = 0    Oct = 0
      Nov = 0    Nov = 0
      Dec = 0    Dec = 0

    I know I need to first get the number of days in a date range (and keep track of whether they fall out of the current month and into the next month) and then do an aggregate on the number of days. I am just stuck on how to do it elegantly in Django. Any help (or a hint in the right direction) is greatly appreciated.
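
    A sketch of the per-month day counting in plain Python, applied after fetching the BookedAsset rows for one asset; whether the end date counts as a booked day is a convention this sketch chooses (both endpoints inclusive), so adjust it to match the intended counting:

      import calendar
      from datetime import date

      def booked_days_per_month(bookings, year):
          """bookings: iterable of (startdate, enddate) pairs.
          Returns {month: booked_days} for the given year, splitting
          bookings that span a month boundary across both months."""
          days = {m: 0 for m in range(1, 13)}
          for start, end in bookings:
              for month in range(1, 13):
                  first = date(year, month, 1)
                  last = date(year, month, calendar.monthrange(year, month)[1])
                  overlap_start = max(start, first)
                  overlap_end = min(end, last)
                  if overlap_start <= overlap_end:
                      days[month] += (overlap_end - overlap_start).days + 1
          return days

      # booked_days_per_month([(date(2010, 2, 11), date(2010, 2, 13)),
      #                        (date(2010, 4, 30), date(2010, 5, 1))], 2010)
      # -> Feb: 3, Apr: 1, May: 1 with this inclusive convention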

    Read the article

  • Designing DAOs around a JSON API for iPhone Development

    - by Bob Spryn
    So I've been trying to design a clean way of grabbing data for my models in iPhone land. All the data for my application is coming from JSON APIs. So right now when a VC needs some models, it does the JSON call itself (async) and when it receives the data, it builds the models. It works, but I'm trying to think of a cleaner method whereby the DAOs retrieve the information for me and return the models, all in an async manner. My initial thought is to build a protocol for my DAOs, such that the VC would instantiate a DAO and make itself the delegate. When you requested data [DAOinstance getAllUsers] the DAO would do all the network request stuff, and then when it had the data, it would call a method on its delegate (the VC) to pass the data. So I think that's a cool solution, but I realized that if I needed to use the same DAO for different purposes in the same VC, my delegate method would have to branch logic depending on which DAO instance initiated the request. So my second thought was to be able to pass 'handler' selectors to the DAO object a la typical JavaScript patterns. So instead of an official protocol, I would say something like [DAOinstance getAllUsersWithSelector:"TheHandlerFunctionOnMyVC:"] Then when the DAO completed its network activities, it would call the passed selector on the VC, and pass the data back. So am I headed in the wrong direction entirely here? Seems like maybe an ok way to go. Any pointers or articles on designing this kind of data layer would be sweet. Thanks! Bob

    Read the article

  • Modelling deterministic and nondeterministic data separately

    - by Superstringcheese
    I'm working with the Microsoft ADO.NET Entity Framework for a game project. Following the advice of other posters on SO, I'm considering modelling deterministic and nondeterministic data separately. The idea for this came from a discussion on multiplayer games, but it seemed to make sense in a single-player scenario as well. Deterministic (things that aren't going to change during gameplay): Attributes (Strength, Agility, etc.) and their descriptions; Skills and their descriptions and requirements; Races, Factions, Equipment, etc.; base Attribute/Skill/Equipment loadouts for monsters. Nondeterministic (things that will change a lot during gameplay): Beings' current AttributeModifiers (Potion of Might = +10 Strength), current health and mana, etc.; player inventory, cash, experience, level; player quest states; player FactionRelationships; ...and so on. My deterministic model would serve as a set of constants. My nondeterministic model would provide my on-the-fly operable data and would be serialized to a savegame file to maintain game state between play sessions. The data store will be an embedded SQL Compact database. So I might want to create relations between my Attributes table (deterministic model) and my BeingAttributeModifiers table (nondeterministic model), but how do I set that up across models?

      Det model/db          Nondet model/db
      ____________          ________________________
      |Attributes  |        |PlayerAttributeModifiers|
      |------------|        |------------------------|
      |Id          |        |Id                      |
      |Name        |        |AttributeId             |
      |Description |        |SourceId                |
       ------------         |Value                   |
                             ------------------------

    Should I use two separate models (edmx) that transact with a single database containing both deterministic-type and nondeterministic-type tables? Or should/can I use two separate databases in one model? Or two models, each with their own database? With distinct models/dbs it seems like this will get really complicated and I'll end up fighting EF a lot, rolling my own transaction code, and generally losing out on a lot of the advantages of the framework. I know these are vague questions; I'm just looking for a sanity check before I forge ahead any further.

    Read the article
