Search Results

Search found 21317 results on 853 pages for 'key mapping'.


  • NHibernate ManyToMany Relationship Cascading AllDeleteOrphan StackOverflowException

    - by Chris
    I have two objects with a many-to-many relationship to one another through a mapping table. However, when I try to save, I get a StackOverflowException. The following is the code for the mappings:

    ```csharp
    // EventMapping.cs
    HasManyToMany(x => x.Performers)
        .Table("EventPerformer").Inverse().Cascade.AllDeleteOrphan().LazyLoad()
        .ParentKeyColumn("EventId").ChildKeyColumn("PerformerId");

    // PerformerMapping.cs
    HasManyToMany<Event>(x => x.Events)
        .Table("EventPerformer").Inverse().Cascade.AllDeleteOrphan().LazyLoad()
        .ParentKeyColumn("PerformerId").ChildKeyColumn("EventId");
    ```

    When I change PerformerMapping.cs to Cascade.None() the exception goes away, but then my Event object doesn't have the performer I associate with it:

    ```csharp
    // In a unit test, paraphrased
    event.Performers.Add(performer);
    eventRepository.Save<Event>(event);
    eventResult = eventRepository.GetById<Event>(event.id);
    eventResult.Performers[0]; // is null, should have the performer in it
    ```

    How should I be writing this properly? Thanks.
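
    A likely contributor here (not confirmed by the post itself) is that both sides are marked Inverse(), so neither side owns the EventPerformer link table, and AllDeleteOrphan is not a supported cascade for many-to-many collections. A minimal sketch of one conventional split, assuming Event owns the relationship; class and property names come from the question, everything else is illustrative:

    ```csharp
    // EventMapping.cs: owning side, writes the EventPerformer rows
    HasManyToMany(x => x.Performers)
        .Table("EventPerformer")
        .ParentKeyColumn("EventId").ChildKeyColumn("PerformerId")
        .Cascade.SaveUpdate()   // AllDeleteOrphan is not valid for many-to-many
        .LazyLoad();

    // PerformerMapping.cs: inverse side, only mirrors the association
    HasManyToMany<Event>(x => x.Events)
        .Table("EventPerformer")
        .ParentKeyColumn("PerformerId").ChildKeyColumn("EventId")
        .Inverse()
        .Cascade.None()
        .LazyLoad();
    ```

    With this split, adding to event.Performers and saving the event populates the link table, and reloading the event should return the performer, assuming the session is flushed between the save and the read.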

  • Oracle: why does creating a trigger fail when there is a field called timestamp?

    - by Omar Kooheji
    I've just wasted the past two hours of my life trying to create a table with an auto-incrementing primary key based on this tutorial. The tutorial is great; the issue I've been encountering is that the CREATE TRIGGER fails if the table has both a TIMESTAMP column and a column called timestamp. Why doesn't Oracle flag this as an issue when I create the table? Here is the sequence of commands I enter.

    Creating the table:

    ```sql
    CREATE TABLE myTable (
      id NUMBER PRIMARY KEY,
      field1 TIMESTAMP(6),
      timeStamp NUMBER,
    );
    ```

    Creating the sequence:

    ```sql
    CREATE SEQUENCE test_sequence START WITH 1 INCREMENT BY 1;
    ```

    Creating the trigger:

    ```sql
    CREATE OR REPLACE TRIGGER test_trigger
    BEFORE INSERT ON myTable
    REFERENCING NEW AS NEW
    FOR EACH ROW
    BEGIN
      SELECT test_sequence.nextval INTO :NEW.ID FROM dual;
    END;
    /
    ```

    Here is the error message I get:

    ```
    ORA-06552: PL/SQL: Compilation unit analysis terminated
    ORA-06553: PLS-320: the declaration of the type of this expression is incomplete or malformed
    ```

    Any combination that does not have the two lines containing the word "timestamp" works fine. I would have thought the syntax would be enough to differentiate between the keyword and a column name. As I've said, I don't understand why the table is created fine but Oracle falls over when I try to create the trigger.

    CLARIFICATION: I know the issue is that there is a column called timestamp, which may or may not be a keyword. My issue is why it barfed when I tried to create the trigger and not when I created the table; I would have at least expected a warning. That said, having used Oracle for a few hours, it seems a lot less verbose in its error reporting, maybe just because I'm using the Express edition. If this is a bug in Oracle, how would someone without a support contract go about reporting it? I'm just playing around with the Express edition because I have to migrate some code from MySQL to Oracle.
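
    One plausible explanation (hedged, inferred rather than confirmed): the table DDL is plain SQL, where TIMESTAMP is not a reserved word, but the trigger body is compiled as PL/SQL, where TIMESTAMP is a built-in type name, so the clash only surfaces at CREATE TRIGGER time. The simplest workaround, assuming renaming the column is acceptable, is a sketch like this (the name ts_number is illustrative):

    ```sql
    CREATE TABLE myTable (
      id        NUMBER PRIMARY KEY,
      field1    TIMESTAMP(6),
      ts_number NUMBER          -- was "timeStamp"; avoids clashing with the PL/SQL TIMESTAMP type
    );
    ```

    With the column renamed, the sequence and trigger from the post compile without changes.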

  • How do you align text so it is in the middle of a button in Silverlight?

    - by Roy
    Here is the current code I am using:

    ```xml
    <UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                 xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                 x:Class="ButtonPrototype.MainPage"
                 Width="640" Height="480">
      <UserControl.Resources>
        <ControlTemplate x:Key="CellTemplate" TargetType="Button">
          <Grid>
            <Border x:Name="CellBorderBrush" BorderBrush="Black" BorderThickness="1">
              <ContentPresenter Content="{TemplateBinding Content}"
                                HorizontalAlignment="Center" VerticalAlignment="Center"/>
            </Border>
          </Grid>
        </ControlTemplate>
        <Style x:Key="CellStyle" TargetType="Button">
          <Setter Property="Template" Value="{StaticResource CellTemplate}"></Setter>
          <Setter Property="Foreground" Value="Black"></Setter>
          <Setter Property="FontSize" Value="80"></Setter>
          <Setter Property="Width" Value="100"></Setter>
          <Setter Property="Height" Value="100"></Setter>
        </Style>
      </UserControl.Resources>
      <Grid x:Name="LayoutRoot" Background="White">
        <Button Content="A" Style="{StaticResource CellStyle}"></Button>
      </Grid>
    </UserControl>
    ```

    The horizontal alignment works, but the vertical alignment doesn't do anything. Thanks for your help.

  • Strange exception while using linq2sql

    - by zerkms
    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

    Exception Details: System.ArgumentNullException: Value cannot be null. Parameter name: mapping

    Source Error:

    ```
    Line 45:     #endregion
    Line 46:
    Line 47:     public db() :
    Line 48:         base(global::data.Properties.Settings.Default.nanocrmConnectionString, mappingSource)
    Line 49:     {
    ```

    This is what I get if I implement a class like this:

    ```csharp
    partial class db
    {
        static db _db = new db();
        public static db GetInstance()
        {
            return _db;
        }
    }
    ```

    db is a LINQ to SQL DataContext. Why does this happen and how do I solve it?
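
    A likely explanation (hedged, based on how the designer-generated DataContext is usually structured): the handwritten static field initializer `_db = new db()` can run before the designer-generated static `mappingSource` field in the other half of the partial class has been initialized, since initializer order across partial class files is not guaranteed; the base constructor then receives null for `mapping`. A minimal sketch of a lazier accessor that avoids constructing the context during type initialization:

    ```csharp
    partial class db
    {
        private static db _db;

        public static db GetInstance()
        {
            // By the time this method runs, all static field initializers
            // (including the designer's mappingSource) have completed.
            // Not thread-safe; fine as a sketch.
            if (_db == null)
            {
                _db = new db();
            }
            return _db;
        }
    }
    ```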

  • How do you replace an entire xaml element?

    - by luke
    ```xml
    <ListView>
      <ListView.Resources>
        <DataTemplate x:Key="label">
          <TextBlock Text="{Binding Label}"/>
        </DataTemplate>
        <DataTemplate x:Key="editor">
          <UserControl Content="{Binding Control.content}"/> <!-- This is the line -->
        </DataTemplate>
      </ListView.Resources>
      <ListView.View>
        <GridView>
          <GridViewColumn Header="Name" CellTemplate="{StaticResource label}"/>
          <GridViewColumn Header="Value" CellTemplate="{StaticResource editor}"/>
        </GridView>
      </ListView.View>
    </ListView>
    ```

    On the marked line, I'm replacing the contents of a UserControl with the contents of another UserControl that is dynamically created in code. I'd like to replace the entire control, and not just the content. Is there a way to do this?
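
    One approach worth sketching (hedged, assuming the bound Control property holds the dynamically created UserControl instance itself): host a ContentControl in the template and bind its Content to the control; a UIElement given as Content is rendered directly, so the whole element is swapped rather than just its inner content.

    ```xml
    <DataTemplate x:Key="editor">
      <!-- "Control" is assumed to expose the dynamically created UserControl -->
      <ContentControl Content="{Binding Control}"/>
    </DataTemplate>
    ```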

  • KeyUp processed for wrong control

    - by Mikael
    I have made a simple test application for the issue: two WinForms forms, each containing a button. The button on the first form opens the other form when clicked; it also subscribes to KeyUp events. The second form has its button set as the AcceptButton, and in the Click event we sleep for 1s and then set the DialogResult (the sleep simulates some processing being done). When Enter is used to close this second form, the KeyUp event of the button on the first form is triggered, even though the key was released well before the second had passed, so the second form was still shown and focused. If any key other than Enter is pressed in the second form, the event is not triggered for the button on the first form.

    First form:

    ```csharp
    public Form1()
    {
        InitializeComponent();
        buttonForm2.KeyUp += new KeyEventHandler(cntKeyUp);
    }

    void cntKeyUp(object sender, KeyEventArgs e)
    {
        MessageBox.Show(e.KeyCode.ToString());
    }

    private void buttonForm2_Click(object sender, EventArgs e)
    {
        using (Form2 f = new Form2())
        {
            f.ShowDialog();
        }
    }
    ```

    Second form:

    ```csharp
    private void button1_Click(object sender, EventArgs e)
    {
        Thread.Sleep(1000);
        this.DialogResult = DialogResult.OK;
    }
    ```

    Does anyone know why the event is triggered for the button on the non-active form, and what can be done to stop this from happening?
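
    A plausible reading (hedged): the AcceptButton acts on the key-down half of Enter, the dialog then blocks in the Click handler and closes, and the queued WM_KEYUP is only processed once the first form is active again, so it lands on its focused button. One workaround sketch, using the names from the question plus an illustrative flag, is to act on KeyUp only when the matching KeyDown was seen on the same control:

    ```csharp
    private bool enterSeenDown;

    public Form1()
    {
        InitializeComponent();
        buttonForm2.KeyDown += (s, e) => { if (e.KeyCode == Keys.Enter) enterSeenDown = true; };
        buttonForm2.KeyUp += cntKeyUp;
    }

    void cntKeyUp(object sender, KeyEventArgs e)
    {
        if (e.KeyCode == Keys.Enter && !enterSeenDown)
            return;              // stray KeyUp left over from the dialog's Enter press
        enterSeenDown = false;
        MessageBox.Show(e.KeyCode.ToString());
    }
    ```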

  • SCD2 + Merge Statement + SQL Server

    - by Nev_Rahd
    I am trying to work out a MERGE statement to insert / update a Type 2 SCD dimension table. My source is a table variable to merge with the dimension table. My MERGE statement is throwing this error:

    ```
    The target table 'DM.DATA_ERROR.ERROR_DIMENSION' of the INSERT statement cannot be on either side of a (primary key, foreign key) relationship when the FROM clause contains a nested INSERT, UPDATE, DELETE, or MERGE statement. Found reference constraint 'FK_ERROR_DIMENSION_to_AUDIT_CreatedBy'.
    ```

    My MERGE statement:

    ```sql
    DECLARE @DATAERROROBJECT AS [ERROR_DIMENSION]

    INSERT INTO DM.DATA_ERROR.ERROR_DIMENSION
    SELECT ERROR_CODE, DATA_STREAM_ID, [ERROR_SEVERITY], DATA_QUALITY_RATING,
           ERROR_LONG_DESCRIPTION, ERROR_DESCRIPTION, VALIDATION_RULE, ERROR_TYPE,
           ERROR_CLASS, VALID_FROM, VALID_TO, CURR_FLAG,
           CREATED_BY_AUDIT_SK, UPDATED_BY_AUDIT_SK
    FROM (
        MERGE DM.DATA_ERROR.ERROR_DIMENSION ED
        USING @DATAERROROBJECT OBJ
           ON (ED.ERROR_CODE = OBJ.ERROR_CODE AND ED.DATA_STREAM_ID = OBJ.DATA_STREAM_ID)
        WHEN NOT MATCHED THEN
            INSERT VALUES (
                OBJ.ERROR_CODE, OBJ.DATA_STREAM_ID, OBJ.[ERROR_SEVERITY],
                OBJ.DATA_QUALITY_RATING, OBJ.ERROR_LONG_DESCRIPTION, OBJ.ERROR_DESCRIPTION,
                OBJ.VALIDATION_RULE, OBJ.ERROR_TYPE, OBJ.ERROR_CLASS,
                GETDATE(), '9999-12-13', 'Y', 1, 1)
        WHEN MATCHED AND ED.CURR_FLAG = 'Y' AND (
                ED.[ERROR_SEVERITY] <> OBJ.[ERROR_SEVERITY]
             OR ED.[DATA_QUALITY_RATING] <> OBJ.[DATA_QUALITY_RATING]
             OR ED.[ERROR_LONG_DESCRIPTION] <> OBJ.[ERROR_LONG_DESCRIPTION]
             OR ED.[ERROR_DESCRIPTION] <> OBJ.[ERROR_DESCRIPTION]
             OR ED.[VALIDATION_RULE] <> OBJ.[VALIDATION_RULE]
             OR ED.[ERROR_TYPE] <> OBJ.[ERROR_TYPE]
             OR ED.[ERROR_CLASS] <> OBJ.[ERROR_CLASS]) THEN
            UPDATE SET ED.CURR_FLAG = 'N', ED.VALID_TO = GETDATE()
        OUTPUT $ACTION ACTION_OUT,
               OBJ.ERROR_CODE ERROR_CODE,
               OBJ.DATA_STREAM_ID DATA_STREAM_ID,
               OBJ.[ERROR_SEVERITY] [ERROR_SEVERITY],
               OBJ.DATA_QUALITY_RATING DATA_QUALITY_RATING,
               OBJ.ERROR_LONG_DESCRIPTION ERROR_LONG_DESCRIPTION,
               OBJ.ERROR_DESCRIPTION ERROR_DESCRIPTION,
               OBJ.VALIDATION_RULE VALIDATION_RULE,
               OBJ.ERROR_TYPE ERROR_TYPE,
               OBJ.ERROR_CLASS ERROR_CLASS,
               GETDATE() VALID_FROM,
               '9999-12-31' VALID_TO,
               'Y' CURR_FLAG,
               555 CREATED_BY_AUDIT_SK,
               555 UPDATED_BY_AUDIT_SK
    ) AS MERGE_OUT
    WHERE MERGE_OUT.ACTION_OUT = 'UPDATE';
    ```

    What am I doing wrong?
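
    The error is SQL Server refusing a nested INSERT...SELECT over a MERGE's OUTPUT when the target table participates in a foreign key. A common workaround (a sketch only, with made-up table and column names; the real statement would keep the ERROR_DIMENSION columns) is to capture the OUTPUT rows with OUTPUT ... INTO and run the "new current row" insert as a second statement:

    ```sql
    -- dim_demo is a hypothetical SCD2 dimension used purely for illustration
    DECLARE @src TABLE (code int, descr varchar(50));
    DECLARE @changed TABLE (action_out nvarchar(10), code int, descr varchar(50));

    MERGE dbo.dim_demo AS d
    USING @src AS s
       ON d.code = s.code
    WHEN NOT MATCHED THEN
        INSERT (code, descr, valid_from, valid_to, curr_flag)
        VALUES (s.code, s.descr, GETDATE(), '9999-12-31', 'Y')
    WHEN MATCHED AND d.curr_flag = 'Y' AND d.descr <> s.descr THEN
        UPDATE SET d.curr_flag = 'N', d.valid_to = GETDATE()
    OUTPUT $ACTION, s.code, s.descr INTO @changed;   -- no nested INSERT, so the FK restriction no longer applies

    -- Second statement adds the replacement "current" rows for the expired ones.
    INSERT INTO dbo.dim_demo (code, descr, valid_from, valid_to, curr_flag)
    SELECT code, descr, GETDATE(), '9999-12-31', 'Y'
    FROM @changed
    WHERE action_out = 'UPDATE';
    ```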

  • Excel Automation Addin UDFs not accessible

    - by Eric
    I created the following automation addin:

    ```csharp
    namespace AutomationAddin
    {
        [Guid("6652EC43-B48C-428a-A32A-5F2E89B9F305")]
        [ClassInterface(ClassInterfaceType.AutoDual)]
        [ComVisible(true)]
        public class MyFunctions
        {
            public MyFunctions()
            {
            }

            #region UDFs
            public string ToUpperCase(string input)
            {
                return input.ToUpper();
            }
            #endregion

            [ComRegisterFunctionAttribute]
            public static void RegisterFunction(Type type)
            {
                Registry.ClassesRoot.CreateSubKey(GetSubKeyName(type, "Programmable"));
                RegistryKey key = Registry.ClassesRoot.OpenSubKey(GetSubKeyName(type, "InprocServer32"), true);
                key.SetValue("", System.Environment.SystemDirectory + @"\mscoree.dll", RegistryValueKind.String);
            }

            [ComUnregisterFunctionAttribute]
            public static void UnregisterFunction(Type type)
            {
                Registry.ClassesRoot.DeleteSubKey(GetSubKeyName(type, "Programmable"), false);
            }

            private static string GetSubKeyName(Type type, string subKeyName)
            {
                System.Text.StringBuilder s = new System.Text.StringBuilder();
                s.Append(@"CLSID\{");
                s.Append(type.GUID.ToString().ToUpper());
                s.Append(@"}\");
                s.Append(subKeyName);
                return s.ToString();
            }
        }
    }
    ```

    I build it and it registers just fine. I open Excel 2003, go to Tools > Add-Ins, click the Automation button, and the addin appears in the list. I add it and it shows up in the add-ins list, but the functions themselves don't appear. If I type one in it doesn't work, and if I look in the function wizard, my addin doesn't show up as a category and the functions are not in the list. I am using Excel 2003 on Windows 7 x86. I built the project with Visual Studio 2010. This addin worked fine on Windows XP built with Visual Studio 2008.

  • Is there a production ready web application framework in Python?

    - by peperg
    I heard lots of good opinions about the Python language. They say it's mature, expressive, etc. Are there any production-ready web application frameworks in Python? By "production ready" I mean:

    - supports object-relational mapping with caching and declarative description (like JPA, Hibernate, etc.)
    - controls-oriented user interface support: no HTML templates, but something like JSF (RichFaces, IceFaces) or GWT, Vaadin, ZK
    - component decomposition and dependency injection (like EJB or Spring)
    - unit and integration testing
    - good IDE support
    - clustering, modularity, etc. (like Terracotta, OSGi)
    - there are successful applications written in it by companies like IBM, Oracle, etc. (I mean real business applications, not Twitter)
    - could have commercial support

    Is this possible at all in the Python world? Or are the only choices:

    - use Python and write everything from the bottom up (too expensive)
    - stick to JEE
    - buy the .NET stack

  • Is this the right way to organize my database tables?

    - by Moss
    So I'm making a website that allows users to build contact lists. So there are users, the users have lists, and the lists have contacts. It seems to me that I need 3 tables for this, but I just want to make sure. There would be a User table of course, and then a "list of lists" table that has the username and list name as primary key, along with whatever other info we want to attach to the lists as a whole. Finally, for lack of a better word, the List table, which would again have the username/listname primary key, then the contact ID and the notes and such that the user attaches to that contact on that specific list. I hope that is a clear explanation. For some reason I feel unsure about this arrangement. For one thing, if the website becomes popular the List table could swell to billions of rows. And it also feels a little weird that everybody's list info is all jumbled up in the same table. I suppose I could create separate tables for each user, and even for each list, but that seems like a bad idea for other reasons. My explanation assumes I can use foreign keys on my tables, which at the moment isn't actually an option. If I can't get InnoDB tables enabled, I will probably use IDs for the lists instead of depending on a compound key. Maybe I should do this anyway?
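
    A sketch of the three-table layout described above, using surrogate integer keys rather than the username/listname compound key; table and column names are illustrative, not from the post:

    ```sql
    CREATE TABLE users (
        user_id   INT AUTO_INCREMENT PRIMARY KEY,
        username  VARCHAR(64) NOT NULL UNIQUE
    ) ENGINE=InnoDB;

    CREATE TABLE lists (
        list_id   INT AUTO_INCREMENT PRIMARY KEY,
        user_id   INT NOT NULL,
        list_name VARCHAR(128) NOT NULL,
        UNIQUE KEY uq_user_list (user_id, list_name),
        FOREIGN KEY (user_id) REFERENCES users (user_id)
    ) ENGINE=InnoDB;

    CREATE TABLE list_contacts (
        list_id    INT NOT NULL,
        contact_id INT NOT NULL,
        notes      TEXT,
        PRIMARY KEY (list_id, contact_id),
        FOREIGN KEY (list_id) REFERENCES lists (list_id)
    ) ENGINE=InnoDB;
    ```

    Keeping every user's rows in the same tables is the normal arrangement; per-user or per-list tables are generally the bad idea the post already suspects they are.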

  • How do you do bulk index lookups efficiently?

    - by Liron Shapira
    I have these entity kinds: Molecule, Atom, MoleculeAtom. Given a list(molecule_ids) whose length is in the hundreds, I need to get a dict of the form {molecule_id: list(atom_ids)}. Likewise, given a list(atom_ids) whose length is in the hundreds, I need to get a dict of the form {atom_id: list(molecule_ids)}. Both of these bulk lookups need to happen really fast. Right now I'm doing something like:

    ```python
    atom_ids_by_molecule_id = {}
    for molecule_id in molecule_ids:
        moleculeatoms = MoleculeAtom.all().filter(
            'molecule =', db.Key.from_path('molecule', molecule_id)).fetch(1000)
        atom_ids_by_molecule_id[molecule_id] = [
            MoleculeAtom.atom.get_value_for_datastore(ma).id() for ma in moleculeatoms
        ]
    ```

    Like I said, len(molecule_ids) is in the hundreds. I need to do this kind of bulk index lookup on almost every single request, and I need it to be FAST, and right now it's too slow.

    Ideas:

    - Will using a Molecule.atoms ListProperty do what I need? Consider that I am storing additional data on the MoleculeAtom node, and remember it's equally important for me to do the lookup in the molecule-atom and atom-molecule directions.
    - Caching? I tried memcaching lists of atom IDs keyed by molecule ID, but I have tons of atoms and molecules, and the cache can't fit it.
    - How about denormalizing the data by creating a new entity kind whose key name is a molecule ID and whose value is a list of atom IDs? The idea is, calling db.get on 500 keys is probably faster than looping through 500 fetches with filters, right? A sketch of this idea follows below.
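
    A sketch of the third idea under the old google.appengine.ext.db API; the MoleculeAtoms kind and its property names are hypothetical, and keeping it in sync when MoleculeAtom rows change is left out:

    ```python
    from google.appengine.ext import db

    class MoleculeAtoms(db.Model):
        # key_name is the molecule id as a string; one entity per molecule
        atom_ids = db.ListProperty(long)

    def get_atom_ids_by_molecule_id(molecule_ids):
        keys = [db.Key.from_path('MoleculeAtoms', str(mid)) for mid in molecule_ids]
        entities = db.get(keys)  # one batch round trip instead of one query per molecule
        result = {}
        for mid, ent in zip(molecule_ids, entities):
            result[mid] = ent.atom_ids if ent is not None else []
        return result
    ```

    A mirror kind keyed by atom id would cover the atom-to-molecule direction in the same way.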

  • Can this code be further optimized?

    - by kaki
    I understand that the code given below will not be completely understood unless I explain the surrounding lines of code, but this is the part that is causing so much delay in my project and I want to optimize it. I want to know which part is at fault and how it could be replaced. I guess a few of you can say that the use of this function is heavy compared to other, lighter methods available to do this work. Please help; thanks in advance.

    ```python
    for i in range(len(lists)):
        save = database_index[lists[i]]
        #print save
        #if save[1] != 'text0194' and save[1] != 'text0526':
        using_data[save[0]] = save
        p = os.path.join("c:/begpython/wavnk/", str(str(str(save[1]).replace('phone', 'text')) + '.pm'))
        x1 = open(p, 'r')
        x2 = open(p, 'r')
        for i in range(6):
            x1.readline()
            x2.readline()
        gen = (float(line.partition(' ')[0]) for line in x1)
        r = min(enumerate(gen), key=lambda x: abs(x[1] - float(save[4])))
        #print r[0]
        a1 = linecache.getline(str(str(p).replace('.pm', '.mcep')), (r[0] + 1))
        #print a1
        p1 = str(str(a1).rstrip('\n')).split(' ')
        #print p1
        join_cost_index_end[save[0]] = p1
        #print join_cost_index_end
        gen = (float(line.partition(' ')[0]) for line in x2)
        r = min(enumerate(gen), key=lambda x: abs(x[1] - float(save[3])))
        #print r[0]
        a2 = linecache.getline(str(str(p).replace('.pm', '.mcep')), (r[0] + 1))
        #print a2
        p2 = str(str(a2).rstrip('\n')).split(' ')
        #print p2
        join_cost_index_strt[save[0]] = p2
        #print join_cost_index_strt
        j = j + 1
        #print j
    #print join_cost_index_end
    #print join_cost_index_strt
    ```

    Here my database_index has about 250,000 entries.
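
    A hedged sketch of one way to cut the per-iteration I/O: read each .pm file once into a list of times, find both nearest indices from that list, and index straight into the companion .mcep lines instead of going through linecache. Variable names follow the original where they exist; the surrounding data structures are assumed to behave as in the post:

    ```python
    import os

    for item in lists:
        save = database_index[item]
        using_data[save[0]] = save
        p = os.path.join("c:/begpython/wavnk/", str(save[1]).replace('phone', 'text') + '.pm')

        with open(p, 'r') as f:
            times = [float(line.partition(' ')[0]) for line in f.readlines()[6:]]  # skip the 6-line header once

        with open(p.replace('.pm', '.mcep'), 'r') as f:
            mcep_lines = f.readlines()                                             # read the companion file once

        i_end = min(range(len(times)), key=lambda i: abs(times[i] - float(save[4])))
        i_strt = min(range(len(times)), key=lambda i: abs(times[i] - float(save[3])))

        join_cost_index_end[save[0]] = mcep_lines[i_end].rstrip('\n').split(' ')
        join_cost_index_strt[save[0]] = mcep_lines[i_strt].rstrip('\n').split(' ')
    ```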

  • How should I design my MySQL table(s)?

    - by yaya3
    I built a really basic PHP/MySQL site for an architect that uses one 'projects' table. The website showcases various projects he has worked on. Each project contained one piece of text and one series of images.

    Original projects table (CREATE syntax):

    ```sql
    CREATE TABLE `projects` (
      `project_id` int(11) NOT NULL auto_increment,
      `project_name` text,
      `project_text` text,
      `image_filenames` text,
      `image_folder` text,
      `project_pdf` text,
      PRIMARY KEY (`project_id`)
    ) ENGINE=MyISAM AUTO_INCREMENT=8 DEFAULT CHARSET=latin1;
    ```

    The client now requires the following, and I'm not sure how to handle the expansions in my DB. My suspicion is that I will need an additional table. Each project now has 'pages'. Pages contain either:

    - one image
    - one piece of text
    - one image and one piece of text

    Each page could use one of three layouts. As each project does not currently have more than 4 pieces of text (a very risky assumption), I have expanded the original table to accommodate everything.

    New projects table attempt (CREATE syntax):

    ```sql
    CREATE TABLE `projects` (
      `project_id` int(11) NOT NULL AUTO_INCREMENT,
      `project_name` text,
      `project_pdf` text,
      `project_image_folder` text,
      `project_img_filenames` text,
      `pages_with_text` text,
      `pages_without_img` text,
      `pages_layout_type` text,
      `pages_title` text,
      `page_text_a` text,
      `page_text_b` text,
      `page_text_c` text,
      `page_text_d` text,
      PRIMARY KEY (`project_id`)
    ) ENGINE=MyISAM AUTO_INCREMENT=8 DEFAULT CHARSET=latin1;
    ```

    In trying to learn more about MySQL table structuring, I have just read an intro to normalization and A Simple Guide to Five Normal Forms in Relational Database Theory. I'm going to keep reading! Thanks in advance.
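
    One normalized sketch the suspicion above points to: keep `projects` roughly as it was and add a `pages` table with one row per page, so a project can have any number of pages and the "no more than 4 pieces of text" assumption disappears. Column names here are illustrative:

    ```sql
    CREATE TABLE `pages` (
      `page_id`        int(11) NOT NULL AUTO_INCREMENT,
      `project_id`     int(11) NOT NULL,              -- which project the page belongs to
      `page_order`     int(11) NOT NULL DEFAULT 0,    -- position of the page within the project
      `layout_type`    tinyint NOT NULL,              -- one of the three layouts
      `page_title`     text,
      `page_text`      text,                          -- NULL for image-only pages
      `image_filename` text,                          -- NULL for text-only pages
      PRIMARY KEY (`page_id`),
      KEY `idx_project` (`project_id`)
    ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
    ```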

  • Am I returning the correct values?

    - by phill
    I wrote the following code:

    ```java
    import java.lang.*;
    import DB.*;

    private Boolean validateInvoice(String i) {
        int count = 0;
        try {
            // check how many rowsets
            ResultSet c = connection.DBquery(
                "select count(*) from Invce i, cust c where tranid like '" + i + "' and i.key = c.key ");
            while (c.next()) {
                System.out.println("rowcount : " + c.getInt(1));
                count = c.getInt(1);
            }
            if (count > 0) {
                return TRUE;
            } else {
                return FALSE;
            } // end if
        } catch (Exception e) {
            e.printStackTrace();
            return FALSE;
        }
    }
    ```

    The errors I'm getting are:

    ```
    i.java:195: cannot find symbol
    symbol  : variable TRUE
    location: class changei.iTable
            return TRUE;
    i.java:197: cannot find symbol
    symbol  : variable TRUE
    location: class changei.iTable
            return FALSE;
    i.java:201: cannot find symbol
    symbol  : variable FALSE
    location: class changei.iTable
            catch(Exception e){e.printStackTrace();return FALSE;}
    ```

    The Connection class comes from the DB package I created. Is the return TRUE/FALSE correct since the function has a Boolean return type?
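
    Java has no bare TRUE/FALSE identifiers, which is what the compiler is complaining about. With a `Boolean` return type the method can return the primitive literals (autoboxed since Java 5) or the boxed constants; a sketch of the two usual fixes, changing only the return statements:

    ```java
    // Option 1: lowercase boolean literals
    if (count > 0) {
        return true;
    } else {
        return false;
    }

    // Option 2: the boxed constants on java.lang.Boolean
    if (count > 0) {
        return Boolean.TRUE;
    } else {
        return Boolean.FALSE;
    }
    ```

    `return count > 0;` also works and is shorter.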

  • Avoid implicit conversion from date to timestamp for selects with Oracle using Hibernate

    - by sapporo
    I'm using Hibernate 3.2.7.GA criteria queries to select rows from an Oracle Enterprise Edition 10.2.0.4.0 database, filtering by a timestamp field. The field in question is of type java.util.Date in Java and DATE in Oracle. It turns out that the field gets mapped to java.sql.Timestamp, and Oracle converts all rows to TIMESTAMP before comparing to the passed-in value, bypassing the index and thereby ruining performance. One solution would be to use Hibernate's sqlRestriction() along with Oracle's TO_DATE function. That would fix performance, but requires rewriting the application code (lots of queries). So is there a more elegant solution? Since Hibernate already does type mapping, could it be configured to do the right thing?

    Update: The problem occurs in a variety of configurations, but here's one specific example:

    - Oracle Enterprise Edition 10.2.0.4.0
    - Oracle JDBC Driver 11.1.0.7.0
    - Hibernate 3.2.7.GA
    - Hibernate's Oracle10gDialect
    - Java 1.6.0_16
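
    One workaround that has been used for this driver behaviour (hedged: whether it applies depends on the driver version, and the flag is deprecated in later Oracle drivers) is the oracle.jdbc.V8Compatible connection property, which makes the driver bind java.sql.Timestamp parameters as DATE so the index stays usable, with no query code changes. A sketch of setting it where the connection properties are built; it can also be passed as a JVM system property (-Doracle.jdbc.V8Compatible=true):

    ```java
    java.util.Properties props = new java.util.Properties();
    props.setProperty("user", "scott");                      // illustrative credentials
    props.setProperty("password", "tiger");
    props.setProperty("oracle.jdbc.V8Compatible", "true");   // bind Timestamp parameters as DATE

    java.sql.Connection conn = java.sql.DriverManager.getConnection(
            "jdbc:oracle:thin:@//dbhost:1521/ORCL", props);  // hypothetical connect string
    ```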

  • MySQL InnoDB performance optimization and indexing

    - by Davide C
    Hello everybody, I have 2 databases and I need to link information between two big tables (more than 3M rows each, continuously growing). The 1st database has a table 'pages' that stores various information about web pages, and includes the URL of each one. The column 'URL' is a varchar(512) and has no index. The 2nd database has a table 'urlHops' defined as:

    ```sql
    CREATE TABLE urlHops (
      dest varchar(512) NOT NULL,
      src varchar(512) DEFAULT NULL,
      timestamp timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
      KEY dest_key (dest),
      KEY src_key (src)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1
    ```

    Now, I basically need to issue (efficiently) queries like this:

    ```sql
    select p.id, p.URL
    from db1.pages p, db2.urlHops u
    where u.src = p.URL and u.dest = ?
    ```

    At first, I thought of adding an index on pages(URL). But it's a very long column, and I already issue a lot of INSERTs and UPDATEs on that table (way more than the number of SELECTs I would do using this index). Other possible solutions I thought of:

    - adding a column to pages that stores the md5 hash of the URL and indexing it; this way I could do queries using the md5 of the URL, with the advantage of an index on a smaller column.
    - adding another table that contains only page id and page URL, indexing both columns. But this is maybe a waste of space, with the only advantage of not slowing down the inserts and updates I execute on 'pages'.

    I don't want to slow down the inserts and updates, but at the same time I would like to do the queries on the URL efficiently. Any advice? My primary concern is performance; if needed, wasting some disk space is not a problem. Thank you, regards, Davide
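
    A sketch of the first idea (the md5 column); names are illustrative, and the same trick would have to be applied to urlHops.src if both sides of the join should use the short key:

    ```sql
    ALTER TABLE pages
      ADD COLUMN url_md5 CHAR(32) NOT NULL DEFAULT '',
      ADD KEY idx_url_md5 (url_md5);

    -- backfill once
    UPDATE pages SET url_md5 = MD5(URL);

    -- the join then compares fixed-length 32-char values instead of varchar(512)
    SELECT p.id, p.URL
    FROM db1.pages p
    JOIN db2.urlHops u ON u.src_md5 = p.url_md5   -- assumes urlHops gets a matching src_md5 column
    WHERE u.dest = ?;
    ```

    The application (or a trigger) has to keep url_md5 in sync on INSERT/UPDATE, which is the cost of this approach.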

  • IStatelessSession insert object with many-to-one

    - by Andrew Kalashnikov
    Hello guys. I've got a common mapping:

    ```xml
    <class name="NotSyncPrice, Portal.Core" table='Not_sync_price'>
      <id name="Id" unsaved-value="0">
        <column name="id" not-null="true"/>
        <generator class="native"/>
      </id>
      <many-to-one name="City" class="Clients.Core.Domains.City, Clients.Core" column="city_id" cascade="none"></many-to-one>
      <!--<property name="City">
        <column name="city_id"/>
      </property>-->
    ```

    I want to use IStatelessSession for batch insert. But when I set a City object on the NotSyncPrice object and insert it through the IStatelessSession, I get a strange exception:

    ```
    NHibernate.Impl.StatelessSessionImpl.get_Timestamp()
    ```

    When it's null or an int, all is OK. I have tried using both a real and a proxy City object, but no result. What's wrong? Please help.

  • Foreach loop returning null values in PHP?

    - by Jascha
    Hello, I have a pretty simple problem. Basically I have an array called $list that is a list of titles. If I do a print_r($list) I get these results:

    ```
    Array
    (
        [0] => Another New Title
        [1] => Awesome Movies and stuff
        [2] => Jascha's Title
    )
    ```

    Now, I'm running a foreach loop to retrieve their values and format them in a <ul> like so:

    ```php
    function get_film_list(){
        global $categories;
        $list = $categories->get_film_list();
        if(count($list)==0){
            echo 'No films are in this category';
        }else{
            echo '<ul>';
            foreach($list as $title){
                echo '<li>' . $title . '<li>';
            }
            echo '</ul>';
        }
    }
    ```

    The problem I'm having is my loop is returning two values per value (is it the key value?). The result of the preceding function looks like this:

    ```
    Another New Title
     
    Awesome Movies and stuff
     
    Jascha's Title
     
    ```

    I even tried:

    ```php
    foreach($list as $key => $title){
        echo '<li>' . $title . '<li>';
    }
    ```

    With the same results. What am I missing here? Thanks in advance.
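
    Most likely this is not the loop at all: in both versions the echo writes an opening `<li>` tag twice instead of closing the item, so every title is followed by an empty list item, which renders as the blank "extra value". A minimal fix sketch:

    ```php
    foreach ($list as $title) {
        echo '<li>' . $title . '</li>';   // close the tag; '<li>' twice creates an empty extra item
    }
    ```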

  • [EF 4 POCO] Problem with INSERT...

    - by Darmak
    Hi all, I'm so frustrated because of this problem, you have no idea... I have 2 classes: Post and Comment. I use EF 4 POCO support, and I don't have foreign key columns in my .edmx model (the Comment class doesn't have a PostID property, but it has a Post property):

    ```csharp
    class Comment
    {
        public Post post { get; set; }
        // ...
    }

    class Post
    {
        public virtual ICollection<Comment> Comments { get; set; }
        // ...
    }
    ```

    Can someone tell me why the code below doesn't work? I want to create a new comment for a post:

    ```csharp
    Comment comm = context.CreateObject<Comment>();
    Post post = context.Posts.Where(p => p.Slug == "something").SingleOrDefault();
    // post != null, so don't worry, be happy

    // here I set all other comm properties and...
    comm.Post = post;
    context.AddObject("Comments", comm); // Exception here
    context.SaveChanges();
    ```

    The exception is:

    ```
    Cannot insert the value NULL into column 'PostID', table 'Blog.Comments'; column does not allow nulls. INSERT fails.
    ```

    ... this 'PostID' column is of course a foreign key to the Posts table. Any help will be appreciated!
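
    With independent associations (no FK property) and POCOs, the relationship usually only gets picked up when the change tracker can see it. A common arrangement (a sketch, not guaranteed to match this exact model) is to attach the new comment through the tracked post's collection, so the association is recorded before SaveChanges:

    ```csharp
    Post post = context.Posts.Where(p => p.Slug == "something").SingleOrDefault();

    Comment comm = context.CreateObject<Comment>();
    // ... set the other comment properties ...

    post.Comments.Add(comm);   // adding through the tracked collection records the association
    context.SaveChanges();
    ```

    If Comments is null because change-tracking proxies or lazy loading aren't enabled, the collection needs to be initialized before the Add.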

  • Core Plot bar chart does not work

    - by user355068
    Hello~ I'm trying to draw a bar chart. numberForPlot: is not working well; only CPBarPlotFieldBarLength gets set.

    ```objc
    -(NSNumber *)numberForPlot:(CPPlot *)plot field:(NSUInteger)fieldEnum recordIndex:(NSUInteger)index
    {
        NSDecimalNumber *num = nil;
        NSString *key = (fieldEnum == CPScatterPlotFieldX) ? @"x" : @"y";
        if ( [plot isKindOfClass:[CPBarPlot class]] ) {
            switch ( fieldEnum ) {
                NSLog(@"fieldEnum = %d", fieldEnum);
                case CPBarPlotFieldBarLocation:
                    num = (NSDecimalNumber *)[NSDecimalNumber numberWithUnsignedInteger:index];
                    NSLog(@"CPBarPlotFieldBarLocation return num = %@", num);
                    break;
                case CPBarPlotFieldBarLength:
                    num = (NSDecimalNumber *)[NSDecimalNumber numberWithUnsignedInteger:(index+1)*(index+1)];
                    if ( [plot.identifier isEqual:Plot3Identity] )
                        num = [[self.ExForPlot objectAtIndex:index] valueForKey:key];
                    NSLog(@"CPBarPlotFieldBarLength return num = %@", num);
                    break;
            }
        } else {
            NSLog(@"...??..");
        }
        return num;
    }
    ```

    Log:

    ```
    2010-06-01 02:43:19.424 myHealth[5071:207] CPBarPlotFieldBarLength return num = 0
    2010-06-01 02:43:19.425 myHealth[5071:207] CPBarPlotFieldBarLength return num = 0
    2010-06-01 02:43:19.425 myHealth[5071:207] CPBarPlotFieldBarLength return num = 30
    ```

    ExForPlot setup:

    ```objc
    // Add some initial data
    NSMutableArray *contentArray = [NSMutableArray arrayWithCapacity:100];
    int prevWeight = 0;
    int prevBmi = 0;
    if (sqlite3_prepare_v2(database, sql, -1, &statement, NULL) == SQLITE_OK) {
        while (sqlite3_step(statement) == SQLITE_ROW) {
            id x = [NSNumber numberWithInt:graphTimeStamp];
            id y = [NSNumber numberWithInt:sqlite3_column_int(statement, 2)];
            [contentArray addObject:[NSMutableDictionary dictionaryWithObjectsAndKeys:x, @"x", y, @"y", nil]];
        }
        self.ExForPlot = contentArray;
    ```

  • Optimizing MySQL for ALTER TABLE of InnoDB

    - by schuilr
    Sometime soon we will need to make schema changes to our production database. We need to minimize downtime for this effort; however, the ALTER TABLE statements are going to run for quite a while. Our largest tables have 150 million records, and the largest table file is 50G. All tables are InnoDB, and it was set up as one big data file (instead of file-per-table). We're running MySQL 5.0.46 on an 8-core machine with 16G of memory and a RAID10 config. I have some experience with MySQL tuning, but it usually focuses on reads or writes from multiple clients. There is lots of info to be found on the Internet on this subject; however, there seems to be very little information available on best practices for (temporarily) tuning your MySQL server to speed up ALTER TABLE on InnoDB tables, or for INSERT INTO ... SELECT FROM (we will probably use this instead of ALTER TABLE to have some more opportunities to speed things up a bit). The schema change we are planning is adding an integer column to all tables and making it the primary key, instead of the current primary key. We need to keep the 'old' column as well, so overwriting the existing values is not an option. What would be the ideal settings to get this task done as quickly as possible?
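
    A sketch of settings that are commonly loosened for a one-off rebuild like this; the values are assumptions to adapt to the 16G box, and each one should be verified against MySQL 5.0 behaviour before the production run:

    ```sql
    -- per-session, before running the INSERT INTO ... SELECT or ALTER TABLE
    SET SESSION unique_checks = 0;        -- skip secondary-index unique checks during the bulk load
    SET SESSION foreign_key_checks = 0;   -- skip FK validation while copying
    SET SESSION sort_buffer_size = 64 * 1024 * 1024;

    -- server-level (my.cnf), restart required on 5.0:
    -- innodb_buffer_pool_size        = 12G   (as much of the 16G as the workload allows)
    -- innodb_log_file_size           = 512M  (larger redo logs mean fewer checkpoints during the copy)
    -- innodb_flush_log_at_trx_commit = 2     (relax durability only for the migration window)
    ```

    Re-enable the checks and restore the durability setting as soon as the migration is done.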

  • Is testability alone justification for dependency injection?

    - by fearofawhackplanet
    The advantages of DI, as far as I am aware, are:

    - reduced dependencies
    - more reusable code
    - more testable code
    - more readable code

    Say I have a repository, OrderRepository, which acts as a repository for an Order object generated through a LINQ to SQL dbml. I can't make my orders repository generic, as it performs mapping between the LINQ Order entity and my own Order POCO domain class. Since the OrderRepository by necessity is dependent on a specific LINQ to SQL DataContext, parameter passing of the DataContext can't really be said to make the code reusable or reduce dependencies in any meaningful way. It also makes the code harder to read: to instantiate the repository I now need to write new OrdersRepository(new MyLinqDataContext()), which additionally is contrary to the main purpose of the repository, namely to abstract/hide the existence of the DataContext from consuming code. So in general I think this would be a pretty horrible design, but it would give the benefit of facilitating unit testing. Is this enough justification? Or is there a third way? I'd be very interested in hearing opinions.
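
    One common middle ground, sketched here with illustrative code rather than the poster's actual classes: keep a default constructor for production use and an injectable one for tests, so consuming code never sees the DataContext but tests still have a seam.

    ```csharp
    public class OrdersRepository
    {
        private readonly MyLinqDataContext _context;

        // Production path: the repository hides the DataContext entirely.
        public OrdersRepository() : this(new MyLinqDataContext()) { }

        // Test path: a test can pass a context pointed at a test database
        // (or a fake, if the context is wrapped behind an interface).
        public OrdersRepository(MyLinqDataContext context)
        {
            _context = context;
        }
    }
    ```

    This is sometimes called "poor man's DI"; it keeps call sites readable while leaving a seam for testing.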

  • Use only some parts of Django?

    - by Hanno Fietz
    I like Django, but for a particular application I would like to use only parts of it. I'm not familiar enough with how Django works on the inside, so maybe someone can point me in the right direction as to what I should check out. Specifically, I want to use:

    - the models and database abstraction
    - the caching API, although I want to avoid database lookups by caching, not HTML generation, and since the caching framework in Django is intended for the latter, I'm not sure yet whether that's really appropriate

    I would not use:

    - templating
    - urlconfs

    Or, more exactly, I'm neither using HTTP nor HTML. So basically, I have a different input/output chain than usual. Can this work? My personal killer feature in Django is the object/database mapping that I can do with the models, so if there's another technology (it doesn't have to be Python; I'm in the design phase and I'm pretty agnostic about languages and platforms) that gives me the same abilities, that would be great too.
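
    A minimal sketch of using the ORM outside a web project; the exact incantation varies by Django version (django.setup() exists from 1.7 onward), and the app and model names are illustrative:

    ```python
    import django
    from django.conf import settings

    settings.configure(
        INSTALLED_APPS=["myapp"],                 # hypothetical app that holds the models
        DATABASES={
            "default": {
                "ENGINE": "django.db.backends.sqlite3",
                "NAME": "standalone.db",
            }
        },
    )
    django.setup()

    from myapp.models import Page                 # any model defined in myapp works normally

    print(Page.objects.filter(title__startswith="A").count())
    ```

    The cache framework can be used the same way through django.core.cache once a CACHES entry is added to the configure() call; neither templates nor urlconfs need to be involved.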

  • Type conversion between PHP client and Java webservice

    - by a1ex07
    I have a web service implemented as an EJB. One of its methods returns Map<String,String>. On the client side I use PHP:

    ```php
    $client = new SoapClient($wsdl, array("cache_wsdl" => WSDL_CACHE_NONE));
    $result = $client->foo($params);
    ```

    Everything works fine, but I would like $result->return to be an associative array. Right now it looks like:

    ```
    array(10) {
      [0]=> object(stdClass)#46 (2) {
        ["key"]=> string(4) "key1"
        ["value"]=> string(4) "val1"
      }
      ....
    ```

    I want:

    ```
    array(10) { "key1"=>"value1", "key2"=>"value2", .... }
    ```

    The obvious solution is to iterate through this array and create a new array:

    ```php
    $arr = array();
    foreach ($result->return as $val)
        $arr[$val->key] = $val->value;
    ```

    But I wonder if there is a better way to get an associative array? Thanks in advance.
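
    If the PHP version allows it (array_column accepts arrays of objects and reads their public properties from PHP 7.0 onward, an assumption to check against the deployed runtime), the loop collapses to a one-liner; on older versions the explicit loop above is already about as good as it gets:

    ```php
    // PHP 7+ only
    $arr = array_column($result->return, 'value', 'key');
    ```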

  • Mapped Stored Procedures in EF 1

    - by michael.lukatchik
    All, I'm using mapped stored procedures in EF 1. I've completed the following steps:

    1) I created my INSERT, UPDATE, and DELETE stored procedures in SQL Server.
    2) I built the EDMX and imported the INSERT, UPDATE, and DELETE sprocs as part of my model.
    3) I set up a stored procedure mapping on a table inside of my EDMX file. The INSERT, UPDATE, and DELETE sprocs were mapped accordingly.

    Using this approach, I would expect to rebuild the application (and mine builds successfully) and then see the stored procedures as available function names via my EDMX object, such as _entities.InsertComment(..), _entities.UpdateComment(..), and _entities.DeleteComment(..). IntelliSense is not picking these names up and I can't figure out why. If I perform these same steps using EF4, the function names are automatically picked up by IntelliSense after adding the stored procedure mappings. Is this a bug in EF1? Is there something else I should be doing? Thanks in advance, Mike
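
    For what it's worth (hedged, based on how EF's mapping model generally works rather than on this specific project): stored procedures mapped as a table's insert/update/delete functions are not surfaced as context methods; they are invoked internally by SaveChanges. Context methods like _entities.InsertComment(..) come from separate Function Imports added in the model browser. A sketch of the distinction, with a hypothetical Comment entity:

    ```csharp
    // Mapped CUD sprocs: nothing new appears on the context; SaveChanges uses them.
    var comment = new Comment { /* ... */ };
    _entities.AddToComments(comment);   // AddToComments is the generated entity-set method
    _entities.SaveChanges();            // runs the mapped INSERT stored procedure

    // Context methods only appear for explicit Function Imports,
    // e.g. a hypothetical GetCommentsForPost import:
    // var comments = _entities.GetCommentsForPost(postId);
    ```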
