Search Results

Search found 52589 results on 2104 pages for 'read table'.


  • Virgin STI Help

    - by Mutuelinvestor
    I am working on a horse racing application and I'm trying to utilize STI to model a horse's connections. A horse's connections comprise its owner, trainer and jockey. Over time, connections can change for a variety of reasons: the horse is sold to another owner, the owner switches trainers or jockeys, or the horse is claimed by a new owner. As it stands now, I have modeled this with the following tables: horses, connections (join table) and stakeholders (stakeholder has three subclasses: jockey, trainer & owner). Here are my classes and associations:

        class Horse < ActiveRecord::Base
          has_one :connection
          has_one :owner_stakeholder, :through => :connection
          has_one :jockey_stakeholder, :through => :connection
          has_one :trainer_stakeholder, :through => :connection
        end

        class Connection < ActiveRecord::Base
          belongs_to :horse
          belongs_to :owner_stakeholder
          belongs_to :jockey_stakeholder
          belongs_to :trainer_stakeholder
        end

        class Stakeholder < ActiveRecord::Base
          has_many :connections
          has_many :horses, :through => :connections
        end

        class Owner < Stakeholder
          # Owner specific code goes here.
        end

        class Jockey < Stakeholder
          # Jockey specific code goes here.
        end

        class Trainer < Stakeholder
          # Trainer specific code goes here.
        end

    On the database end, I have inserted a Type column in the connections table. Have I modeled this correctly? Is there a better/more elegant approach? Thanks in advance for your feedback. Jim

    Read the article

  • Read/Write protected memory?

    - by PenguinBoy
    I'm currently trying to learn C++, but I'm having issues with the code below.

        class Vector2
        {
        public:
            double X;
            double Y;

            Vector2(double X, double Y)
            {
                this->X = X;
                this->Y = Y;
            };

            SDL_Rect * getSdlOffset()
            {
                SDL_Rect * offset = new SDL_Rect();
                offset->x = this->X;
                offset->y = this->Y;
                return offset;
            };
        };

    Visual Studio throws the following error when calling getSdlOffset():

        An unhandled exception of type 'System.AccessViolationException' occurred in crossEchoTest.exe
        Additional information: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.

    I've got a C#/Java background, and I'm lost... Any help would be much appreciated.
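
    For reference, nothing in the excerpt pins down the crash: System.AccessViolationException is a .NET exception type, which suggests the project is being built or debugged as managed (C++/CLI) code, and the fault may lie in how the Vector2 object itself is obtained rather than inside getSdlOffset(). Independently of that, the method leaks a heap-allocated SDL_Rect on every call. A minimal sketch that returns the rect by value instead (assuming SDL2, where SDL_Rect's members are plain ints):

        #include <SDL.h>   // assumption: SDL2, whose SDL_Rect has int x, y, w, h

        class Vector2
        {
        public:
            double X;
            double Y;

            Vector2(double x, double y) : X(x), Y(y) {}

            // Returning by value avoids new/delete entirely, so there is
            // nothing for the caller to free and nothing left to leak.
            SDL_Rect toSdlRect() const
            {
                SDL_Rect r;
                r.x = static_cast<int>(X);
                r.y = static_cast<int>(Y);
                r.w = 0;
                r.h = 0;
                return r;
            }
        };

        // Hypothetical usage:
        //   Vector2 pos(10.0, 20.0);
        //   SDL_Rect offset = pos.toSdlRect();
        //   SDL_BlitSurface(src, NULL, dst, &offset);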

    Read the article

  • mongodb read/write performance and mongo hosting in the cloud

    - by z3cko
    We are currently developing a high-traffic Rails application with Facebooker (a Facebook game). Since Amazon SimpleDB (AWS SDB) is really slow, we are thinking of using a dedicated MongoDB server, as offered by MongoHQ for example. Questions:

    - What is the peak read/write rate for a MongoDB server running on an Amazon EC2 instance?
    - What would be a recommended setup for an EC2-hosted app with MongoDB - a master on Amazon EBS and replicas on the EC2 instances? Any examples or experiences?
    - Is there a company that offers MongoDB hosting in the cloud?

    Thanks, mz

    Read the article

  • acts_as_xapian jobs table

    - by Grnbeagle
    Hi, can someone explain to me the inner workings of the acts_as_xapian_jobs table? I ran into an issue with the acts_as_xapian plugin recently, where I kept getting the following error when it creates an object with Xapian-indexed fields:

        Mysql::Error: Duplicate entry 'String-2147483647' for key 2:
        INSERT INTO `acts_as_xapian_jobs` (`action`, `model`, `model_id`) VALUES ('update', 'String', 23730251831560)

    It turns out the model_id exceeded the max int value of 2147483647. The workaround was to change model_id to use BIGINT. Why would the model_id be so huge? Looking at the content of acts_as_xapian_jobs, it seems to create a row for every field that is being indexed. Understanding how a job gets created in the table would help a great deal. Here's a sampling of the table:

        mysql> select * from acts_as_xapian_jobs limit 5\G
        *************************** 1. row ***************************
              id: 19
           model: String
        model_id: 23804037900560
          action: update
        *************************** 2. row ***************************
              id: 49
           model: String
        model_id: 23804037191200
          action: update
        *************************** 3. row ***************************
              id: 79
           model: String
        model_id: 23804037932180
          action: update
        *************************** 4. row ***************************
              id: 109
           model: String
        model_id: 23804037101700
          action: update
        *************************** 5. row ***************************
              id: 139
           model: String
        model_id: 23804037722160
          action: update

    Thanks in advance, Amie

    Read the article

  • Code doesn't work: Cannot read property 'className' of undefined

    - by Arlen Beiler
    What is wrong with this code?

        var divarray = [];
        var articleHTML = [];
        var absHTML;
        var keyHTML;
        var bodyHTML = [];
        var i = 0;
        divarray = document.getElementById("yui-main").getElementsByTagName("div");
        for (var j in divarray) {
            if (divarray[i].className == "articleBody") {
                alert("found");
                articleHTML = divarray[i];
                break;
            }
            bodyHTML[i] = '';
            if (articleHTML[i].className == "issueMiniFeature") { continue; }
            if (articleHTML[i].className == "abstract") { absHTML = articleHTML[i]; continue; }
            if (articleHTML[i].className == "journalKeywords") { keyHTML = articleHTML[i]; continue; }
            bodyHTML[i] = articleHTML[i];
            i++;
        }

    The error I get is:

        TypeError: Cannot read property 'className' of undefined

    I am using Google Chrome.

    Read the article

  • NULL-keys for key/value table

    - by user72185
    (Using Oracle) I have a table with key/value pairs like this:

        create table MESSAGE_INDEX (
          KEY        VARCHAR2(256)  not null,
          VALUE      VARCHAR2(4000) not null,
          MESSAGE_ID NUMBER         not null
        )

    I now want to find all the messages where key = 'someKey' and value is 'val1', 'val2' or 'val3' - OR value is null, in which case there will be no entry in the table at all. This is to save space; there would be a large number of keys with null values if I stored them all. I think this works:

        SELECT message_id
          FROM message_index idx
         WHERE ((key = 'someKey' AND value IN ('val1', 'val2', 'val3'))
            OR NOT EXISTS (SELECT 1 FROM message_index
                            WHERE key = 'someKey' AND idx.message_id = message_id))

    But it is extremely slow. It takes 8 seconds with 700K records in message_index, and there will be many more records and more search criteria when moving outside of my test environment. The primary key is (key, value, message_id):

        add constraint PK_KEY_VALUE primary key (KEY, VALUE, MESSAGE_ID)

    And I added another index for message_id, to speed up searching for missing keys:

        create index IDX_MESSAGE_ID on MESSAGE_INDEX (MESSAGE_ID)

    I will be doing several of these key/value lookups in every search, not just one as shown above. So far I am doing them nested, where the output ids of one level are the input to the next. E.g.:

        SELECT message_id FROM message_index
         WHERE (key/value compare)
           AND message_id IN (SELECT ... and so on)

    What can I do to speed this up?

    Read the article

  • FluentNHibernate, getting 1 column from another table

    - by puffpio
    We're using FluentNHibernate and we have run into a problem where our object model requires data from two tables, like so:

        public class MyModel
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
            public virtual int FooId { get; set; }
            public virtual string FooName { get; set; }
        }

    There is a MyModel table that has Id, Name, and FooId as a foreign key into the Foo table. The Foo table contains Id and FooName. This problem is very similar to another post here: http://stackoverflow.com/questions/1896645/nhibernate-join-tables-and-get-single-column-from-other-table but I am trying to figure out how to do it with FluentNHibernate. I can map the Id, Name, and FooId very easily, but I am having trouble mapping FooName. This is my class map:

        public class MyModelClassMap : ClassMap<MyModel>
        {
            public MyModelClassMap()
            {
                this.Id(a => a.Id).Column("AccountId").GeneratedBy.Identity();
                this.Map(a => a.Name);
                this.Map(a => a.FooId);

                // my attempt to map FooName but it doesn't work
                this.Join("Foo", join => join.KeyColumn("FooId").Map(a => a.FooName));
            }
        }

    With that mapping I get this error:

        The element 'class' in namespace 'urn:nhibernate-mapping-2.2' has invalid child element 'join'
        in namespace 'urn:nhibernate-mapping-2.2'. List of possible elements expected: 'joined-subclass,
        loader, sql-insert, sql-update, sql-delete, filter, resultset, query, sql-query'
        in namespace 'urn:nhibernate-mapping-2.2'.

    Any ideas?

    Read the article

  • Trying to convert existing production database table columns from enum to VARCHAR (Rails)

    - by dchua
    Hi everyone, I have a problem that requires me to convert my existing live production table column types from enums to strings (I've duplicated the schema on my local development box, don't worry :)). Background: a previous developer left my codebase in absolute shit; migration versions are extremely out of date, and apparently he never used them after a certain point in development. Now that I'm tasked with migrating a Rails 1.2.6 app to 2.3.5, I can't get the tests to run properly on 2.3.5 because my table columns have ENUM column types, which get dumped as :string, :limit => 0 in my schema.rb. This creates the problem of an invalid default value when doing a rake db:test:prepare, as in:

        Mysql::Error: Invalid default value for 'own_vehicle':
        CREATE TABLE `lifestyles` (
          `id` int(11) DEFAULT NULL auto_increment PRIMARY KEY,
          `member_id` int(11) DEFAULT 0 NOT NULL,
          `own_vehicle` varchar(0) DEFAULT 'Y' NOT NULL,
          `hobbies` text,
          `sports` text,
          `AStar_activities` text,
          `how_know_IRC` varchar(100),
          `IRC_referral` varchar(200),
          `IRC_others` varchar(100),
          `IRC_rdrive` varchar(30)
        ) ENGINE=InnoDB

    I'm thinking of writing a migration task that looks through all the database tables for columns with ENUM types and replaces them with VARCHAR, and I'm wondering if this is the right way to approach this problem. I'm also not very sure how to write it such that it would loop through my database tables and replace all ENUM column types with VARCHAR.

    References
    [1] https://rails.lighthouseapp.com/projects/8994/tickets/997-dbschemadump-saves-enum-columns-as-varchar0-on-mysql
    [2] http://dev.rubyonrails.org/ticket/2832

    Read the article

  • Question about select() and the range of the for loop used to read sockets

    - by Fantastic Fourier
    Just a quick question about using select(). I'm using select() to read from multiple sockets. When I looked up examples of how to use select(), a lot of tutorials showed going through for loops and checking FD_ISSET. The problem I have with those tutorials is that the for loop starts from i = 0 and checks whether the bit has been set for file descriptor i using FD_ISSET. Couldn't the for loop start from, say, your minfd (just like how you would keep track of maxfd)? Or am I missing something? The following link is an example of such a for loop (look at the fourth example that he gives): http://www.developerweb.net/forum/showthread.php?t=2933 If that were the only example out there that used such a for loop, I might understand it was a mistake or bad coding, but I've seen several examples of such loops uselessly going through literally thousands of sockets, and I'm convinced I'm missing something. Any comments or inputs are appreciated.
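
    For reference, FD_ISSET only needs to be checked for descriptors that were actually added with FD_SET, so the loop does not have to start at 0: iterating from a tracked minfd up to maxfd, or better still over your own list of open sockets, behaves the same way. A minimal sketch of the list-based approach, written in plain C-style code, with a hypothetical handle_readable() left as a comment:

        #include <stddef.h>
        #include <sys/select.h>

        /* Wait for activity on the given sockets and visit the readable ones.
           fds[] holds only the descriptors we actually care about. */
        static void poll_sockets(const int *fds, int nfds)
        {
            fd_set readfds;
            int i, maxfd = -1;

            FD_ZERO(&readfds);
            for (i = 0; i < nfds; i++) {          /* add only tracked descriptors */
                FD_SET(fds[i], &readfds);
                if (fds[i] > maxfd)
                    maxfd = fds[i];
            }

            if (select(maxfd + 1, &readfds, NULL, NULL, NULL) > 0) {
                for (i = 0; i < nfds; i++) {      /* check the same list, not 0..maxfd */
                    if (FD_ISSET(fds[i], &readfds)) {
                        /* handle_readable(fds[i]);   hypothetical per-socket handler */
                    }
                }
            }
        }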

    Read the article

  • jQuery dynamically created table rows not responding to events

    - by mithunmo
    I have a table with one row in my HTML file. In jQuery I then append a lot of rows based on the data in the database, but the anchor tag which is created dynamically doesn't respond to events. How do I make this anchor tag respond to events?

    jQuery part:

        // Appending rows from the database
        $('<tr> <td> </td> <td id="log'+i+'"><a href="viewlog.php" target="_blank">log</a></td></tr> ').appendTo('#container');

    Event part in jQuery - when I write a line of code like this it does not respond:

        $('#container a').click(function() {
            alert("Coming here ");
        });

    HTML:

        <html>
          <table id="container">
            <tr>
              <td> Name </td>
              <td> Link </td>
            </tr>
          </table>
        </html>

    Read the article

  • Many to many table design question

    - by user169867
    Originally I had 2 tables in my DB, [Property] and [Employee]. Each employee can have 1 "Home Property", so the Employee table has a HomePropertyID FK field to Property. Later I needed to model the situation where, despite having only 1 "Home Property", the employee did work at or cover for multiple properties. So I created an [Employee2Property] table that has EmployeeID and PropertyID FK fields to model this many-to-many relationship. Now I find that I need to create other many-to-many relationships between employees and properties. For example, there may be multiple employees that are managers for a property, or multiple employees that perform maintenance work at a property, etc. My questions are:

    1) Should I create separate many-to-many tables for each of these situations, or should I just create one more table like [PropertyAssociationType] that lists the types of associations an employee can have with a property, and add a FK field to [Employee2Property] such as PropertyAssociationTypeID that explains what the association is? I'm curious about the pros/cons, or if there's another better way.

    2) Am I stupid and going about this all wrong?

    Thanks for any suggestions :)

    Read the article

  • Read tab delimited text file into MySQL table with PHP

    - by Simon S
    I am trying to read in a series of tab-delimited text files into existing MySQL tables. The code I have is quite simple:

        $lines = file("import/file_to_import.txt");
        foreach ($lines as $line_num => $line) {
            if ($line_num > 1) {
                $arr = explode("\t", $line);
                $sql = sprintf("INSERT INTO my_table VALUES('%s', '%s', '%s', %s, %s);",
                    trim((string)$arr[0]),
                    trim((string)$arr[1]),
                    trim((string)$arr[2]),
                    trim((string)$arr[3]),
                    trim((string)$arr[4]));
                mysql_query($sql, $database) or die(mysql_error());
            }
        }

    But no matter what I do (hence the casting before each variable in the sprintf statement) I get the "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '' at line 1" error. I echo out the code, paste it into a MySQL editor and it runs fine; it just won't execute from the PHP script. What am I doing wrong? Si

    Read the article

  • How to retrieve data from an image URI on the SD card and read it as bytes

    - by narasimha
    Hi, I am implementing image upload: the user clicks an upload button, then selects an image from the SD card, and I get its URI.

        File Img = new File(selectedImage.getPath());
        System.out.println("2............."+Img);

        FileInputStream is = null;
        try {
            is = new FileInputStream(Img);
            is.read(buffer);
            BufferedInputStream bis = new BufferedInputStream(is);
            Bitmap bm = BitmapFactory.decodeStream(is);
            bis.close();
            is.close();

    I have the image URI; how can I retrieve the data for that particular image path (the Img in the code above)? I am new to Android, so please give me some suggestions.

    Read the article

  • XSLT creating a table with varying amount of columns

    - by H4mm3rHead
    Hi, I have an RSS feed I need to display in a table (it's clothes from a webshop system). The images in the RSS vary in width and height, and I want to make a table that shows them all. First off I would be happy just to show all of the items in 3 columns, but further down the road I need to be able to specify the number of columns in my table through a parameter. I run into a problem showing the tr tag and making it right. This is my code so far:

        <xsl:template match="item">
          <xsl:choose>
            <xsl:when test="position() mod 3 = 0 or position()=1">
              <tr>
                <td>
                  <xsl:value-of select="title"/>
                </td>
              </tr>
            </xsl:when>
            <xsl:otherwise>
              <td>
                <xsl:value-of select="title"/>
              </td>
            </xsl:otherwise>
          </xsl:choose>
        </xsl:template>

    In the RSS all "item" tags are on the same level in the XML, and so far I only need the title shown. The problem seems to be that I need to specify the start tag as well as the end tag of the tr element, and I can't get all 3 elements into it. Does anyone know how to do this?

    Read the article

  • CakePHP: Missing database table

    - by Justin
    I have a CakePHP application that is running fine locally. I uploaded it to a production server and the first page that uses a database connection gives the "Missing Database Table" error. When I look at the controller dump, it's complaining about the first table. I've tried a variety of things to fix this problem, with no luck:

    - I've confirmed that at the command line I can log in with the MySQL credentials given in database.php
    - I've confirmed this table exists
    - I've tried using the MySQL root credentials (temporarily) to see if the problem lies with the permissions of the user. The same error appeared.
    - My debug level is currently set to 3
    - I've deleted the entire contents of /app/tmp/cache
    - I've set 777 permissions on /app/tmp*
    - I've confirmed that I can run DESCRIBE commands at the command-line MySQL client when logged in with the MySQL credentials used by the application
    - I've verified that the CakePHP log file only contains the error I'm seeing in the browser window
    - I've tried all the suggestions I could find in similar postings on SO
    - I've Googled around and didn't find any other ideas

    I think I've eliminated the obvious problems and my research isn't turning anything up. I feel like I'm missing something obvious. Any ideas?

    Read the article

  • Changing the indexing on existing table in SQL Server 2000

    - by Raj
    Guys, here is the scenario:

    - SQL Server 2000 (8.0.2055)
    - The table currently has 478 million rows of data.
    - The primary key column is an INT with IDENTITY.
    - There is a unique constraint imposed on two other columns with a non-clustered index.
    - This is a vendor application and we are only responsible for maintaining the DB.

    Now the vendor has recommended doing the following "to improve performance":

    1. Drop the PK and clustered index
    2. Drop the non-clustered index on the two columns with the UNIQUE CONSTRAINT
    3. Recreate the PK with a NON-CLUSTERED index
    4. Create a CLUSTERED index on the two columns with the UNIQUE CONSTRAINT

    I am not convinced that this is the right thing to do, and I have a number of concerns. By dropping the PK and indexes, you will be creating a heap with 478 million rows of data. Then creating a CLUSTERED INDEX on two columns would be a really mammoth task. Would creating another table with the same structure and the new indexing scheme, copying the data over, dropping the old table and renaming the new one be a better approach? I am also not sure how the stored procs will react. Will they continue using the cached execution plan, considering that they are not being explicitly recompiled? I am simply not able to understand what kind of "performance improvement" this change will provide. I think that this will actually have the reverse effect. All thoughts welcome. Thanks in advance, Raj

    Read the article

  • Optional read from STDIN in C

    - by Yuval A
    My application is basically a shell which expects an input of type cmd [x], where cmd is constant and x is optional. So cmd 1 is legal, as is cmd by itself - then I assume a default parameter for x. I am doing this:

        char cmd[64];
        scanf("%s", cmd);

        int arg;
        scanf("%d", &arg); // but this should be optional

    How can I read the integer parameter, and set it to a default if none is currently available in the prompt? I do not want the prompt to wait for additional input if it was not given in the original command. I tried several versions using fgetc() and getchar() and comparing them to EOF, but to no avail. Each version I tried ends up waiting on that optional integer parameter.
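
    One common way around this is to read the whole line first with fgets() and then pull the optional number out of it with sscanf(), so the parser never blocks waiting for a second token that is not coming. A small sketch of that approach (the default value of 0 is only an assumption):

        #include <stdio.h>

        int main(void)
        {
            char line[128], cmd[64];
            int arg;

            while (fgets(line, sizeof line, stdin) != NULL) {
                arg = 0;                                    /* assumed default for x */
                if (sscanf(line, "%63s %d", cmd, &arg) >= 1) {
                    /* arg keeps the default when only "cmd" was typed */
                    printf("cmd=%s arg=%d\n", cmd, arg);
                }
            }
            return 0;
        }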

    Read the article

  • A many-to-many relation joined disallows intellisense/lookup in joined table

    - by BerggreenDK
    I want to be able to select a product and retrieve all sub-parts (products) within it. My approach is to find the product id, retrieve the list of ProductParts with that as a parent and, while fetching those, follow the key back to the Product child to get the name and details of each part. I was hoping to use something similar to:

        part.linked_product_id.name

    I have two tables: one for [Product], and a relation [ProductPart] that has two FK IDs to [Product].

        Table Product()
        {
            int ID;       // (PRIMARY, NOT NULL)
            String Name;
            ... details removed for overview purpose...
        }

        Table ProductPart()
        {
            int Product_ID;       // FK (linked with relation to Product/parent)
            int Part_Product_ID;  // FK (linked with relation to Product/children)
            ... details removed for overview purpose...
        }

    I have checked the SQL diagram and it shows the two relations (both are one-to-many), and in my DBML they also look right. Here is my LINQ to SQL snippet that doesn't work for me... I'm wondering why my joins don't work as supposed to.

        FaultySnippet()
        {
            ProductDataContext db = new ProductDataContext();
            var list = ( from part in db.ProductParts
                         join prod in db.Products on part.Part_Product_ID equals prod.ID
                         where (part.Product_ID == product_ID)
                         select new
                         {
                             Name = part.Part_Product_ID. ?? // <-- NO details from joined table?
                             ... rest of properties from ProductPart join... I hoped...
                         } );
        }

    Read the article

  • What is a read only collection in C#?

    - by acidzombie24
    I ran a security code analysis and found myself with a CA2105 warning. I looked at the grade-tampering example. I didn't realize you could still assign to the elements of a readonly int[]; I thought readonly was like the C++ const and made that illegal. The "How to Fix Violations" section suggests I clone the object (which I don't want to do) or "replace the array with a strongly typed collection that cannot be changed". I clicked the link and see ArrayList with each element being added one by one, and it doesn't look like you can prevent something from adding more. So when I have this piece of code, what is the easiest or best way to make it a read-only collection?

        public static readonly string[] example = { "a", "b", "sfsdg", "sdgfhf", "erfdgf", "last one" };

    Read the article

  • Are TestContext.Properties read only ?

    - by DBJDBJ
    Using Visual Studio, generate a unit test class. Then comment the class-initialization method back in and, inside it, add your property using the testContext argument.

        // Use ClassInitialize to run code before running the first test in the class
        [ClassInitialize()]
        public static void MyClassInitialize(TestContext testContext)
        {
            /*
             * Any user defined testContext.Properties
             * added here will be erased upon this method exit
             */
            testContext.Properties.Add("key", 1);  // above works but is lost
        }

    After leaving MyClassInitialize, the properties defined by the user are lost. Only the 10 "official" ones are left. This effectively means TestContext.Properties is read-only for users, which is not clearly documented on MSDN. Please discuss. --DBJ

    Read the article

  • iPhone xcode - Activity Indicator with tab bar controller and multi table view controllers

    - by Frames84
    I've looked for a tutorial and can't seem to find one for an activity indicator in a table view's nav bar. In my MainWindow.xib I have a tab bar controller with 4 tab controllers, each containing a table view. Each loads JSON feeds using the framework hosted at Google. In one of my view controllers I can add an activity indicator to the nav bar by using:

        UIActivityIndicatorView *activityIndcator = [[UIActivityIndicatorView alloc] initWithFrame:CGRectMake(0, 0, 20, 20)];
        [activityIndcator startAnimating];
        UIBarButtonItem *activityItem = [[UIBarButtonItem alloc] initWithCustomView:activityIndcator];
        self.navigationItem.rightBarButtonItem = activityItem;

    and I can turn it off by using:

        self.navigationItem.rightBarButtonItem.enabled = FALSE;

    But if I place this in my viewDidLoad event it shows all the time. I only want it to show when I select a row in my table view, so I added it at the top of didSelectRowAtIndexPath and the stop line after I load a feed. It shows, but it takes a second or two to appear and only shows for about half a second. Is there an event that fires before the didSelectRowAtIndexPath event, a kind of loading event? If not, what is the standard method for implementing such functionality?

    Read the article

  • How can I get read-ahead bytes?

    - by Bruno Martinez
    Operating systems read from disk more than what a program actually requests, because a program is likely to need nearby information in the future. In my application, when I fetch an item from disk, I would like to show an interval of information around the element. There's a trade off between how much information I request and show, and speed. However, since the OS already reads more than what I requested, accessing these bytes already in memory is free. What API can I use to find out what's in the OS caches? Alternatively, I could use memory mapped files. In that case, the problem reduces to finding out whether a page is swapped to disk or not. Can this be done in any common OS?
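
    On POSIX systems, one candidate for the memory-mapped formulation is mincore(), which reports, page by page, whether a mapped region is currently resident in physical memory (Windows has a rough equivalent in QueryWorkingSetEx). A sketch of the idea for a file mapped with mmap() - Linux-flavoured, error handling trimmed, and note that some BSDs declare the vector argument as char * rather than unsigned char *:

        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <unistd.h>

        int main(int argc, char **argv)
        {
            if (argc < 2) return 1;

            int fd = open(argv[1], O_RDONLY);
            struct stat st;
            fstat(fd, &st);

            /* Map the file, then ask the kernel which of its pages are resident. */
            void *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
            long page = sysconf(_SC_PAGESIZE);
            size_t npages = (size_t)((st.st_size + page - 1) / page);

            unsigned char *vec = (unsigned char *)malloc(npages);  /* one byte per page */
            if (mincore(map, st.st_size, vec) == 0) {
                size_t i, resident = 0;
                for (i = 0; i < npages; i++)
                    resident += vec[i] & 1;        /* low bit set: page is in memory */
                printf("%zu of %zu pages resident\n", resident, npages);
            }

            free(vec);
            munmap(map, st.st_size);
            close(fd);
            return 0;
        }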

    Read the article

  • Populating a foreign key table with variable user input

    - by Vincent
    I'm working on a website that will be based on user-contributed data, submitted using a regular HTML form. To simplify my question, let's say that there will be two fields in the form: "User Name" and "Country" (this is just an example, not the actual site). There will be two tables in the database: "countries" and "users," with "users.country_id" being a foreign key to the "countries" table (one-to-many). The initial database will be empty. Users from all over the world will submit their names and the countries they live in, and eventually the "countries" table will get filled out with all of the country names in the world. Since one country can have several alternative names, input like Chile, Chili, Chilli will generate 3 different records in the countries table, but in fact there is only one country. When I search for records from Chile, Chili and Chilli will not be included. So my question is: what would be the best way to deal with a situation like this, under conditions where the initial database is empty, no other resources are available, and everything is based on user input? How can I organize it in such a way that Chile, Chili and Chilli would be treated as one country, with minimum manual interference? What are the best practices when it comes to normalizing user-submitted data, and is there a scientific term for this? I'm sure this is a common problem. Again, I used country names just to simplify my question; it can be anything that has possible different spellings.

    Read the article

  • Excel File opened under Ubuntu, read by R, OpenOffice

    - by Vishal Belsare
    I have a bunch of Excel files which are updated on a daily basis on a Windows machine. I transfer them to an Ubuntu machine and want to open them there. Specifically, I want to read the files as a database under R. A couple of years ago, I used ODBC under a Windows machine to open Excel files through R. Is there any way I can do this with R under Ubuntu? I could make a database .ODB file for the corresponding XLS files using OpenOffice, but I don't know of a way to connect to a .ODB database. OpenOffice seems to have ways to connect TO databases, but no way to connect to ODB. Thanks for any potential solutions.

    Read the article

  • iPhone / NSArray : How do I format a text file to be read in using arrayWithContentsOfFile

    - by nickthedude
    I have several large word lists and I had been loading them in place in the code as follows:

        NSArray *dict3 = [[NSArray alloc] initWithObjects:@"abled",@"about",@"above",@"absurd",@"absurdity", ...

    but now it is actually causing an EXC_BAD_ACCESS error, weirdly. I know it seems implausible, but for some reason IT IS causing it (trust me on this, I just spent the better part of a day debugging this crap). Anyway, I'm looking to get these humongous lines of code out of my files, but I'm not sure what the best approach is. I guess I could do a plist, but I need to figure out how to automate the process. It would be easiest if I could just use the text files I've been compiling, so if anyone knows how I can format the text file so that

        myArray = [NSArray arrayWithContentsOfFile:@"myTextFile.txt"];

    will read it in correctly as one word per element in the array, it would really be appreciated. Thanks, Nick

    Read the article
