Search Results

Search found 19377 results on 776 pages for 'el key'.


  • Core jQuery event modification problem

    - by DSKVR
    I am attempting to overwrite a core jQuery event, in this case the keydown event. My intention is to preventDefault() the Left (37), Up (38), Right (39) and Down (40) keys to maintain the consistency of hot keys in my web application. I am using the solution provided here for the conditional charCode preventDefault problem. For some reason, my function overwrite is simply not firing, and I cannot put my finger on the problem. I am afraid that over the past 30 minutes this issue has resulted in some hair loss. Anybody have the remedy?

        /* Modify keydown event to prevent default PageDown and PageUp functionality */
        (function () {
            var charCodes = new Array(37, 38, 39, 40);
            var original = jQuery.fn.keydown;
            jQuery.fn.keydown = function (e) {
                var key = e.charCode ? e.charCode : e.keyCode ? e.keyCode : 0;
                alert('now why is my keydown mod not firing?');
                if ($.inArray(key, charCodes)) {
                    alert('is one of them, do not scroll page');
                    e.preventDefault();
                    return false;
                }
                original.apply(this, arguments);
            };
        })();


  • Tree deletion with NHibernate

    - by Tigraine
    Hi, I'm struggling with a little problem and starting to arrive at the conclusion that it's simply not possible. I have a table called Group. As with most of these systems, Group has a ParentGroup and a Children collection. So the Group table looks like this:

        Group
        - ID (PK)
        - Name
        - ParentId (FK)

    I did my mappings using FNH AutoMappings, but I had to override the defaults for this:

        p.References(x => x.Parent)
            .Column("ParentId")
            .Cascade.All();
        p.HasMany(x => x.Children)
            .KeyColumn("ParentId")
            .ForeignKeyCascadeOnDelete()
            .Cascade.AllDeleteOrphan()
            .Inverse();

    Now, the general idea was to be able to delete a node and have all of its children deleted too by NH. So deleting the only root node should basically clear the whole table. I tried first with Cascade.AllDeleteOrphan, but that works only for deletion of items from the Children collection, not deletion of the parent. Next I tried ForeignKeyCascadeOnDelete so the operation gets delegated to the database through on delete cascade. But once I do that, MSSql2008 does not allow me to create this constraint, failing with:

        Introducing FOREIGN KEY constraint 'FKBA21C18E87B9D9F7' on table 'Group' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints.

    Well, and that's it for me. I guess I'll just loop through the children and delete them one by one, thus doing an N+1. If anyone has a suggestion on how to do that more elegantly, I'd be eager to hear it.


  • named type not used for constructor injection

    - by nmarun
    Hi, I have a simple console application where I have the following setup:

        public interface ILogger
        {
            void Log(string message);
        }

        class NullLogger : ILogger
        {
            private readonly string version;

            public NullLogger()
            {
                version = "1.0";
            }

            public NullLogger(string v)
            {
                version = v;
            }

            public void Log(string message)
            {
                Console.WriteLine("NULL " + version + " : " + message);
            }
        }

    The configuration details are below:

        <type type="UnityConsole.ILogger, UnityConsole" mapTo="UnityConsole.NullLogger, UnityConsole">

    My calling code looks as below:

        IUnityContainer container = new UnityContainer();
        UnityConfigurationSection section = (UnityConfigurationSection)ConfigurationManager.GetSection("unity");
        section.Containers.Default.Configure(container);
        ILogger nullLogger = container.Resolve<ILogger>();
        nullLogger.Log("hello");

    This works fine, but once I give a name to this type, something like:

        <type type="UnityConsole.ILogger, UnityConsole" mapTo="UnityConsole.NullLogger, UnityConsole" name="NullLogger">

    the above calling code does not work, even if I explicitly register the type using container.RegisterType<ILogger, NullLogger>(). I get the error:

        {"Resolution of the dependency failed, type = \"UnityConsole.ILogger\", name = \"\". Exception message is: The current build operation (build key Build Key[UnityConsole.NullLogger, null]) failed: The parameter v could not be resolved when attempting to call constructor UnityConsole.NullLogger(System.String v). (Strategy type BuildPlanStrategy, index 3)"}

    Why doesn't Unity look into named instances? To get it to work, I'll have to do:

        ILogger nullLogger = container.Resolve<ILogger>("NullLogger");

    Where is this behavior documented? Arun


  • Applying correct bindings to WPF datatemplate to maximize reusability

    - by johncatfish
    Hi. I have a WPF application with the data template below. I want to apply that data template to a ListBox filled with records from Table02. Then, for each ListBoxItem I need to bind the ComboBox to the same database table (Table01), but for each ListBoxItem the selected item will vary. It will be a foreign key to Table01.

        <DataTemplate x:Key="Table01DataTemplate">
            <Grid>
                <ComboBox ItemsSource="{Binding Model.IQueryable_Table01, RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type Window}}}"
                          SelectedValue="{Binding Table01_ForeignKey}"
                          DisplayMemberPath="name"
                          SelectedValuePath="id" />
                <!-- Other stuff -->
            </Grid>
        </DataTemplate>

        <ListBox x:Name="lbTest" ItemTemplate="{StaticResource Table01DataTemplate}" />

        <!-- In the .cs file: lbTest.DataContext = this; -->

    Notes: Model.IQueryable_Table01 is a property which encapsulates a LINQ to SQL call returning an IQueryable. lbTest will be filled by setting ItemsSource with a LINQ to SQL call.

    Is this a good way to do the bindings in a data template for maximum reusability? I also thought of replacing AncestorType={x:Type Window} / lbTest.DataContext = this; with AncestorType={x:Type Application} / lbTest.DataContext = App.Current; but it didn't work (exception on loading) and I don't know if there are any caveats or downsides to this approach. Is this good? Any suggestions or improvements? Thanks.


  • Binding Data into a Resource

    - by Jordan
    How do you bind data from the view model into an object in the resources of the user control? Here is a very abstract example:

        <UserControl ... xmlns:local="clr-namespace:My.Local.Namespace" Name="userControl">
            <UserControl.Resources>
                <local:GroupingProvider x:Key="groupingProvider" GroupValue="{Binding ???}" />
            </UserControl.Resources>
            <Grid>
                <local:GroupingConsumer Name="groupingConsumer1" Provider="{StaticResource groupingProvider}" />
                <local:GroupingConsumer Name="groupingConsumer2" Provider="{StaticResource groupingProvider}" />
            </Grid>
        </UserControl>

    How do I bind GroupValue to a property in the view model behind this view? I've tried the following:

        <local:GroupingProvider x:Key="groupingProvider" GroupValue="{Binding ElementName=userControl, Path=DataContext.Property}" />

    But this doesn't work.

    Edit: GroupingProvider extends DependencyObject and GroupValue is the name of a DependencyProperty. I'm getting a debugging message telling me that the property to which I am binding doesn't exist.


  • Best Practice: Protecting Personally Identifiable Data in an ASP.NET / SQL Server 2008 Environment

    - by William
    Thanks to a SQL injection vulnerability found last week, some of my recommendations are being investigated at work. We recently re-did an application which stores personally identifiable information whose disclosure could lead to identity theft. While we read some of the data on a regular basis, the restricted data we only need a couple of times a year, and then only two employees need it.

    I've read up on SQL Server 2008's encryption functions, but I'm not convinced that's the route I want to go. My problem ultimately boils down to the fact that we're either using symmetric keys or asymmetric keys encrypted by a symmetric key. Thus it seems like a SQL injection attack could lead to a data leak. I realize permissions should prevent that, but permissions should also have prevented the leak in the first place.

    It seems to me the better method would be to asymmetrically encrypt the data in the web application, then store the private key offline and have a fat client that they can run the few times a year they need to access the restricted data, so the data could be decrypted on the client. This way, if the server gets compromised, we don't leak old data, although depending on what they do we may leak future data. I think the big disadvantage is this would require re-writing the web application and creating a new fat application (to pull the restricted data). Due to the recent problem, I can probably get the time allocated, so now would be the proper time to make the recommendation.

    Do you have a better suggestion? Which method would you recommend? More importantly, why?


  • How to optimize an SQL query to make it faster

    - by user502083
    Hello everyone: I have a very simple small database; two of the tables are:

        Node (Node_ID, Node_name, Node_Date)   : Node_ID is the primary key
        Citation (Origin_Id, Target_Id)        : PRIMARY KEY (Origin_Id, Target_Id), each is an FK into Node

    Now I write a query that first finds all citations whose Origin_Id has a specific date, and then I want to know what the target dates of these records are. I'm using sqlite in Python. The Node table has 3000 records and Citation has 9000 records, and my query looks like this in a function:

        def cited_years_list(self, date):
            c = self.cur
            try:
                c.execute("""select n.Node_Date, count(*)
                             from Node n
                             INNER JOIN (select c.Origin_Id AS Origin_Id,
                                                c.Target_Id AS Target_Id,
                                                n.Node_Date AS Date
                                         from CITATION c
                                         INNER JOIN NODE n ON c.Origin_Id = n.Node_Id
                                         where CAST(n.Node_Date as INT) = {0}) VW
                             ON VW.Target_Id = n.Node_Id
                             GROUP BY n.Node_Date;""".format(date))
                cited_years = c.fetchall()
                self.conn.commit()
                print('Cited Years are : \n ', str(cited_years))
            except Exception as e:
                print('Cited Years retrival failed ', e)
            return cited_years

    Then I call this function for some specific years, but it's crazy slow (around 1 minute for a specific year). Although my query works fine, it is slow. Would you please give me a suggestion to make it faster? I'd appreciate any idea about optimizing this query. I should also mention that I have indices on Origin_Id and Target_Id, so the inner join should be pretty fast, but it's not!
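
    For illustration only (a sketch, not taken from the post): the same counts can be produced in one statement that joins Node twice, once for the origin side and once for the target side, with the year bound as a parameter rather than formatted into the SQL string. Table and column names follow the post; the CAST is kept so the comparison behaves as before, and the database filename below is a placeholder assumption.

        # Sketch only: one pass over Citation joined to Node twice (origin and
        # target), grouping directly on the target dates. The year is bound by
        # the driver instead of being format()-ed into the query text.
        import sqlite3

        def cited_years_list(conn, year):
            cur = conn.cursor()
            cur.execute(
                """
                SELECT tgt.Node_Date, COUNT(*)
                FROM Citation c
                JOIN Node org ON c.Origin_Id = org.Node_ID
                JOIN Node tgt ON c.Target_Id = tgt.Node_ID
                WHERE CAST(org.Node_Date AS INT) = ?
                GROUP BY tgt.Node_Date
                """,
                (int(year),),
            )
            return cur.fetchall()

        # Placeholder usage; "citations.db" is an assumed filename.
        conn = sqlite3.connect("citations.db")
        print(cited_years_list(conn, 2009))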


  • WPF CollectionViewSource Grouping

    - by Miles
    I'm using a CollectionViewSource to group my data. In my data, I have Property1 and Property2 that I need to group on. The only stipulation is that I don't want sub-groups of another group. So, when I group by these two properties, I don't want Property2 to become a subgroup of Property1's group. The reason I want this is that I want to have a header that shows the following information:

    Header:

        <TextBlock.Text>
            <MultiBinding StringFormat="Property1: {0}, Property2: {1}">
                <Binding Path="Property1" />
                <Binding Path="Property2" />
            </MultiBinding>
        </TextBlock.Text>

    I've tried this with my CollectionViewSource but was not able to "combine" the group and subgroup together:

        <CollectionViewSource x:Key="myKey" Source="{Binding myDataSource}">
            <CollectionViewSource.GroupDescriptions>
                <PropertyGroupDescription PropertyName="Property1" />
                <PropertyGroupDescription PropertyName="Property2" />
            </CollectionViewSource.GroupDescriptions>
        </CollectionViewSource>

    Is it possible to group two properties together? Something like below?

        <CollectionViewSource x:Key="myKey" Source="{Binding myDataSource}">
            <CollectionViewSource.GroupDescriptions>
                <PropertyGroupDescription PropertyName="Property1,Property2" />
            </CollectionViewSource.GroupDescriptions>
        </CollectionViewSource>


  • nhibernate mapping: delete collection, insert new collection with old IDs

    - by npeBeg
    My issue looks similar to this one: (link), but I have a one-to-many association:

        <set name="Fields" cascade="all-delete-orphan" lazy="false" inverse="true">
            <key column="[TEMPLATE_ID]"></key>
            <one-to-many class="MyNamespace.Field, MyLibrary"/>
        </set>

    (I also tried to use .) This mapping is for the Template object. Both it and the Field object have their ID generators set to identity. So when I call session.Update for the Template object it works fine - well, almost: if the Field object has an Id number, an UPDATE sql request is issued; if the Id is 0, an INSERT is performed. But if I delete a Field object from the collection, it has no effect on the database.

    I found that if I also call session.Delete for this Field object, everything will be OK, but due to the client-server architecture I don't know what to delete. So I decided to delete all the collection elements from the DB and call session.Update with a new collection, and I've got an issue: NHibernate performs the UPDATE operation for the Field objects that have a non-zero Id, but they have already been removed from the DB!

    Maybe I should use some other Id generator or something... What is the best way to make NHibernate perform a "delete all"/"insert all" routine for the collection?


  • Why am I getting undefined using JSON in jQuery?

    - by YoniGeek
    I'm learning some JSON. I'm trying to list some data about dogs from Twitter... but I can't really present the data... I believe that the error is inside the map method... something I'm missing... thanks for your help.

        <body>
        <h1>U almost there!!</h1>
        <script src="jquery-1.7.1.js"></script>
        <script>
        // PubSub
        (function( $ ) {
            var o = $( {} );
            $.each({
                trigger: 'publish',
                on: 'subscribe',
                off: 'unsubscribe'
            }, function( key, val ) {
                jQuery[val] = function() {
                    o[key].apply( o, arguments );
                };
            });
        })( jQuery );

        $.getJSON('http://search.twitter.com/search.json?q=dogs&callback=?', function( info ) {
            $.publish( 'twitter/info', info );
        });

        // ...

        $.subscribe( 'twitter/info', function( e, info ) {
            $('body').html(
                $.map( info, function( obj ) { // <--- here is the error, something I'm missing, right?
                    return '<li>' + obj.text + '</li>';
                }).join('')
            );
        });
        </script>
        </body>
        </html>


  • iPhone --- 3DES Encryption returns "wrong" results?

    - by Jan Gressmann
    Hello fellow developers, I have some serious trouble with a CommonCrypto function. There are two existing applications for BlackBerry and Windows Mobile; both use Triple-DES encryption with ECB mode for data exchange. On either one the encrypted results are the same. Now I want to implement the 3DES encryption into our iPhone application, so I went straight for CommonCrypto: http://www.opensource.apple.com/source/CommonCrypto/CommonCrypto-32207/CommonCrypto/CommonCryptor.h

    I get some results if I use CBC mode, but they do not correspond with the results of Java or C#. Anyway, I want to use ECB mode, but I don't get this working at all - there is a parameter error showing up... This is my call for the ECB mode... I stripped it a little bit:

        const void *vplainText;
        plainTextBufferSize = [@"Hello World!" length];
        bufferPtrSize = (plainTextBufferSize + kCCBlockSize3DES) & ~(kCCBlockSize3DES - 1);
        plainText = (const void *) [@"Hello World!" UTF8String];
        NSString *key = @"abcdeabcdeabcdeabcdeabcd";

        ccStatus = CCCrypt(kCCEncrypt,
                           kCCAlgorithm3DES,
                           kCCOptionECBMode,
                           key,
                           kCCKeySize3DES,
                           nil,                 // iv, not used with ECB
                           plainText,
                           plainTextBufferSize,
                           (void *)bufferPtr,   // output
                           bufferPtrSize,
                           &movedBytes);

    It is more or less the code from here: http://discussions.apple.com/thread.jspa?messageID=9017515

    But as already mentioned, I get a parameter error each time... When I use kCCOptionPKCS7Padding instead of kCCOptionECBMode and set the same initialization vector in C# and my iPhone code, the iPhone gives me different results. Is there a mistake in getting my output from the bufferPtr? Currently I get the encrypted stuff this way:

        NSData *myData = [NSData dataWithBytes:(const void *)bufferPtr length:(NSUInteger)movedBytes];
        result = [[NSString alloc] initWithData:myData encoding:NSISOLatin1StringEncoding];

    It seems I almost tried every setting twice, different encodings and so on... where is my error?


  • Access to map data

    - by herzl shemuelian
    I have a complex map that is defined like this:

        typedef short short1;
        typedef short short2;
        typedef map<short1, short2> data_list;
        typedef map<string, data_list> table_list;

    I have a class that fills table_list:

        class GroupingClass
        {
            table_list m_table_list;

            string Buildkey(OD e1) {
                string ostring;
                ostring += string(e1.m_Date, sizeof(Date));
                ostring += string(e1.m_CT, sizeof(CT));
                ostring += string(e1.m_PT, sizeof(PT));
                return ostring;
            }

            void operator() (const map<short1, short2>::value_type& myPair) {
                OptionsDefine e1 = myPair.second;
                string key = Buildkey(e1);
                m_table_list[key][e1.m_short2] = e1.m_short2;
            }

            operator table_list() {
                return m_table_list;
            }
        };

    and I use it like this:

        table_list TL2;
        GroupingClass gc;
        TL2 = for_each(mapOD.begin(), mapOD.end(), gc);

    But when I try to access the internal map I have problems, for example:

        data_list tmp;
        tmp = TL2["AAAA"];
        short i = tmp[1]; // the i variable is not updated

    But if I use a loop with an iterator it works properly. Why does this not work the first way? Thanks, herzl


  • Generating authentication header from azure table through objective-c

    - by user923370
    I'm fetching data from iCloud and for that I need to generate a header (Azure table storage). I used the code below for that, and it generates the headers. But when I use these headers in my project it shows "make sure that the value of Authorization header is formed correctly including the signature." I googled a lot and tried many code samples, but in vain. Can anyone please help me with where I'm going wrong in this code?

        - (id)generat {
            NSString *messageToSign = [NSString stringWithFormat:@"%@/%@/%@", dateString, AZURE_ACCOUNT_NAME, tableName];
            NSString *key = @"asasasasasasasasasasasasasasasasasasasasas==";
            const char *cKey = [key cStringUsingEncoding:NSUTF8StringEncoding];
            const char *cData = [messageToSign cStringUsingEncoding:NSUTF8StringEncoding];
            unsigned char cHMAC[CC_SHA256_DIGEST_LENGTH];
            CCHmac(kCCHmacAlgSHA256, cKey, strlen(cKey), cData, strlen(cData), cHMAC);
            NSData *HMAC = [[NSData alloc] initWithBytes:cHMAC length:sizeof(cHMAC)];
            NSString *hash = [Base64 encode:HMAC];
            NSLog(@"Encoded hash: %@", hash);

            NSURL *url = [NSURL URLWithString:@"http://my url"];
            NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
            [request addValue:[NSString stringWithFormat:@"SharedKeyLite %@:%@", AZURE_ACCOUNT_NAME, hash] forHTTPHeaderField:@"Authorization"];
            [request addValue:dateString forHTTPHeaderField:@"x-ms-date"];
            [request addValue:@"application/atom+xml, application/xml" forHTTPHeaderField:@"Accept"];
            [request addValue:@"UTF-8" forHTTPHeaderField:@"Accept-Charset"];
            NSLog(@"Headers: %@", [request allHTTPHeaderFields]);
            NSLog(@"URL: %@", [[request URL] absoluteString]);
            return request;
        }

        - (NSString*)rfc1123String:(NSDate *)date {
            static NSDateFormatter *df = nil;
            if (df == nil) {
                df = [[NSDateFormatter alloc] init];
                df.locale = [[[NSLocale alloc] initWithLocaleIdentifier:@"en_US"] autorelease];
                df.timeZone = [NSTimeZone timeZoneWithAbbreviation:@"GMT"];
                df.dateFormat = @"EEE',' dd MMM yyyy HH':'mm':'ss 'GMT'";
            }
            return [df stringFromDate:date];
        }


  • MySQL query puzzle - finding what WOULD have been the most recent date

    - by Hank
    I've looked all over and haven't yet found an intelligent way to handle this, though I feel sure one is possible. One table of historical data has quarterly information:

        CREATE TABLE Quarterly (
            unique_ID INT UNSIGNED NOT NULL,
            date_posted DATE NOT NULL,
            datasource TINYINT UNSIGNED NOT NULL,
            data FLOAT NOT NULL,
            PRIMARY KEY (unique_ID));

    Another table of historical data (which is very large) contains daily information:

        CREATE TABLE Daily (
            unique_ID INT UNSIGNED NOT NULL,
            date_posted DATE NOT NULL,
            datasource TINYINT UNSIGNED NOT NULL,
            data FLOAT NOT NULL,
            qtr_ID INT UNSIGNED,
            PRIMARY KEY (unique_ID));

    The qtr_ID field is not part of the feed of daily data that populated the database - instead, I need to retroactively populate the qtr_ID field in the Daily table with the Quarterly.unique_ID row ID, using what would have been the most recent quarterly data on that Daily.date_posted for that data source. For example, if the quarterly data is

        101 2009-03-31 1 4.5
        102 2009-06-30 1 4.4
        103 2009-03-31 2 7.6
        104 2009-06-30 2 7.7
        105 2009-09-30 1 4.7

    and the daily data is

        1001 2009-07-14 1 3.5 ??
        1002 2009-07-15 1 3.4 &&
        1003 2009-07-14 2 2.3 ^^

    then we would want the ?? qtr_ID field to be assigned '102' as the most recent quarter for that data source on that date, and && would also be '102', and ^^ would be '104'.

    The challenges include that both tables (particularly the daily table) are actually very large, they can't be normalized to get rid of the repetitive dates or otherwise optimized, and for certain daily entries there is no preceding quarterly entry. I have tried a variety of joins, using datediff (where the challenge is finding the minimum value of datediff greater than zero), and other attempts but nothing is working for me - usually my syntax is breaking somewhere. Any ideas welcome - I'll execute any basic ideas or concepts and report back.
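
    For illustration only, one shape such a back-fill could take (an assumption, not something given in the post) is a correlated subquery that, for each daily row, picks the quarterly row from the same datasource with the latest date_posted on or before the daily date; daily rows with no preceding quarterly entry are simply left NULL. The Python/PyMySQL wrapper and the connection details are placeholders; the UPDATE could just as well be run directly in MySQL.

        # Hedged sketch: back-fill Daily.qtr_ID with a correlated subquery.
        # PyMySQL and the connection parameters are illustrative assumptions;
        # any DB-API driver would work the same way.
        import pymysql

        conn = pymysql.connect(host="localhost", user="user", password="secret", database="history")
        try:
            with conn.cursor() as cur:
                cur.execute("""
                    UPDATE Daily d
                    SET d.qtr_ID = (
                        SELECT q.unique_ID
                        FROM Quarterly q
                        WHERE q.datasource = d.datasource
                          AND q.date_posted <= d.date_posted
                        ORDER BY q.date_posted DESC
                        LIMIT 1
                    )
                """)
            conn.commit()
        finally:
            conn.close()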


  • iPhone Simulator - Restores Plist - Weird Issue

    - by David Schiefer
    Hi all, today I've come across a rather weird issue. After adding code to validate a key (I added it below), the Simulator refuses to let go of the old plist file. I deleted the simulator folder in the Application Support folder, then deleted the build directory, restarted Xcode and built & ran my app... still the same issue: the old plist is still there and 100% identical. I then changed the identifier and the snippet's validation keys; the plist however stayed the same. Basically, no matter what I do it won't go. The same thing happens on the iPhone itself. I have checked through the code, I don't create the key anywhere, but it still returns YES for it at every restart. Here's the code I added:

        + (void)initialize {
            //////////////////////////// SPECIFYING THE PREDATA ///////////////////
            NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
            NSDictionary *appDefaults = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:NO] forKey:@"protect"];
            [defaults registerDefaults:appDefaults];
        }

        if ([[NSUserDefaults standardUserDefaults] boolForKey:@"protect"] == NO) {
            [navigationController pushViewController:help animated:YES];
            [help.navigationItem hidesBackButton];
        } else {
            [window addSubview:passcode.view];
            [self performSelector:@selector(responder) withObject:nil afterDelay:1];
        }

    As a result, it will always go for the else option, which for some reason doesn't get executed either. I assumed an error, but the log is empty and there's no crash.


  • PHP modifying and combining array

    - by Industrial
    Hi everyone, I have a bit of an array headache going on. The function does what I want, but since I am not yet too well acquainted with PHP's array/looping functions, my question is whether there's any part of this function that could be improved from a performance perspective. I tried to be as complete as possible in my descriptions at each stage of the function, which, shortly described, prefixes all keys in an array, fills up eventual empty/non-valid keys with '' and removes the prefixes before returning the array:

        $var = myFunction( array('key1', 'key2', 'key3', '111') );

        function myFunction ($keys) {

            $prefix = 'prefix_';
            $keyCount = count($keys);

            // Prefix each key and remove old keys
            for ($i = 0; $i < $keyCount; $i++) {
                $keys[] = $prefix.$keys[$i];
                unset($keys[$i]);
            }
            // output: array('prefix_key1', 'prefix_key2', 'prefix_key3', 'prefix_111')

            // Get all keys from memcached. Only returns valid keys
            $items = $this->memcache->get($keys);
            // output: array('prefix_key1' => 'value1', 'prefix_key2' => 'value2', 'prefix_key3' => 'value3')
            // note: key 111 was not found in memcache.

            // Fill up eventual keys that are not valid/empty from memcache
            $return = $items + array_fill_keys($keys, '');
            // output: array('prefix_key1' => 'value1', 'prefix_key2' => 'value2', 'prefix_key3' => 'value3', 'prefix_111' => '')

            // Remove the prefixes for each result before returning the array to the application
            foreach ($return as $k => $v) {
                $expl = explode($prefix, $k);
                $return[$expl[1]] = $v;
                unset($return[$k]);
            }
            // output: array('key1' => 'value1', 'key2' => 'value2', 'key3' => 'value3', '111' => '')

            return $return;
        }

    Thanks a lot!


  • Database cache that I'm not aware of?

    - by Martin
    I'm using ASP.NET MVC, LINQ to SQL, IIS7 and SQL Server Express 2008. I get these intermittent server errors: primary key conflicts on insertion. I'm using a different setup on my development computer so I can't debug. After a while they go away; restarting IIS helps. I'm getting the feeling there is a cache somewhere that I'm not aware of. Can somebody help me sort out these errors?

        Cannot insert duplicate key row in object 'dbo.EnquiryType' with unique index 'IX_EnquiryType'.

    Edits regarding Venemo's answer:

    Is it possible that another application is also accessing the same database simultaneously? Yes there is, but not this particular table, and no inserts or updates. There is one other table with which I experience the same problem, but it has to do with a different part of the model.

    How often and in what context do you create a new DataContext instance? Only once, using the singleton pattern.

    Are the primary keys generated by the database or by the application? Database.

    Which version of ASP.NET MVC and which version of .NET are you using? RC2 and 3.5.


  • SQLAlchemy Mapping problem

    - by asdvalkn
    Dear everyone, I am trying to get SQLAlchemy to correctly map my data. Note that a unified group is basically a group of groups (one unifiedGroup maps to many groups, but each group can only map to one ug). So basically this is the definition of my unifiedGroups:

        CREATE TABLE `unifiedGroups` (
            `ugID` INT AUTO_INCREMENT,
            `gID` INT NOT NULL,
            PRIMARY KEY(`ugID`, `gID`),
            KEY( `gID`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 ;

    Note that each row is a (ugID, gID) tuple. (I do not know beforehand how many gIDs there are per ugID, so this is probably the most sensible and simplest method.) This is the definition of my UnifiedGroup class:

        class UnifiedGroup(object):
            """UnifiedProduct behaves very much like a group"""
            def __init__(self, ugID):
                self.ugID = ugID
                # Added by mapping
                self.groups = False

            def __str__(self):
                return '<%s:%s>' % (self.ugID, ','.join([g for g in self.groups]))

    These are my mapping tables:

        tb_groupsInfo = Table('groupsInfo', metadata,
            Column('gID', Integer, primary_key=True),
            Column('gName', String(128)),
        )
        tb_unifiedGroups = Table('unifiedGroups', metadata,
            Column('ugID', Integer, primary_key=True),
            Column('gID', Integer, ForeignKey('groupsInfo.gID')),
        )

    My mapper maps in the following manner:

        mapper(UnifiedGroup, tb_unifiedGroups, properties={
            'groups': relation(Group, backref='unifiedGroup')
        })

    However, when I try to do groupInstance.unifiedGroup, I am getting an empty list [], while groupInstance.unifiedGroup.groups returns me an error:

        AttributeError: 'InstrumentedList' object has no attribute 'groups'
        Traceback (most recent call last):
          File "Mapping.py", line 119, in <module>
            print p.group.unifiedGroup.groups
        AttributeError: 'InstrumentedList' object has no attribute 'groups'

    What is wrong?
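
    For illustration only, a minimal sketch of the conventional one-to-many shape, under the assumption that the foreign key can live on the group row rather than in a separate composite-key table as above. It is written in current declarative syntax (SQLAlchemy 1.4+) rather than the classical mapper()/relation() style used in the post, and the column names are simplified.

        # Sketch only (assumed, simplified schema): the child row carries the
        # parent's key as a foreign key, which gives UnifiedGroup.groups as a
        # list and Group.unifiedGroup as a scalar backref.
        from sqlalchemy import Column, Integer, String, ForeignKey, create_engine
        from sqlalchemy.orm import declarative_base, relationship, Session

        Base = declarative_base()

        class UnifiedGroup(Base):
            __tablename__ = "unifiedGroups"
            ugID = Column(Integer, primary_key=True)
            groups = relationship("Group", backref="unifiedGroup")

        class Group(Base):
            __tablename__ = "groupsInfo"
            gID = Column(Integer, primary_key=True)
            gName = Column(String(128))
            ugID = Column(Integer, ForeignKey("unifiedGroups.ugID"))

        engine = create_engine("sqlite://")
        Base.metadata.create_all(engine)

        with Session(engine) as session:
            ug = UnifiedGroup(groups=[Group(gName="a"), Group(gName="b")])
            session.add(ug)
            session.commit()
            print(ug.groups)                  # list of Group instances
            print(ug.groups[0].unifiedGroup)  # scalar UnifiedGroup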


  • FluentNHibernate: Not.Nullable() doesn't affect output schema

    - by alex
    Hello, I'm using Fluent NHibernate v. 1.0.0.595. There is a class:

        public class Weight
        {
            public virtual int Id { get; set; }
            public virtual double Value { get; set; }
        }

    I want to map it onto the following table:

        create table [Weight] (
            WeightId INT IDENTITY NOT NULL,
            Weight DOUBLE not null,
            primary key (WeightId)
        )

    Here is the map:

        public class WeightMap : ClassMap<Weight>
        {
            public WeightMap()
            {
                Table("[Weight]");
                Id(x => x.Id, "WeightId");
                Map(x => x.Value, "Weight").Not.Nullable();
            }
        }

    The problem is that this mapping produces a table with a nullable Weight column:

        Weight DOUBLE null

    A not-nullable column is generated only with the default convention for the column name (i.e. Map(x => x.Value).Not.Nullable() instead of Map(x => x.Value, "Weight").Not.Nullable()), but in this case there will be a Value column instead of Weight:

        create table [Weight] (
            WeightId INT IDENTITY NOT NULL,
            Value DOUBLE not null,
            primary key (WeightId)
        )

    I found a similar problem here: http://code.google.com/p/fluent-nhibernate/issues/detail?id=121, but it seems the mentioned workaround with SetAttributeOnColumnElement("not-null", "true") is outdated. Has anybody encountered this problem? Is there a way to specify a named column as not-nullable?


  • Issue with creating index organized table

    - by mtim
    I'm having a weird problem with an index organized table. I'm running Oracle 11g Standard. I have a table src_table:

        SQL> desc src_table;
         Name            Null?    Type
         --------------- -------- ----------------------------
         ID              NOT NULL NUMBER(16)
         HASH            NOT NULL NUMBER(3)
         ........

        SQL> select count(*) from src_table;

          COUNT(*)
        ----------
          21108244

    Now let's create another table and copy 2 columns from src_table:

        SQL> set timing on
        SQL> create table dest_table(id number(16), hash number(20), type number(1));

        Table created.
        Elapsed: 00:00:00.01

        SQL> insert /*+ APPEND */ into dest_table (id,hash,type) select id, hash, 1 from src_table;

        21108244 rows created.
        Elapsed: 00:00:15.25

        SQL> ALTER TABLE dest_table ADD ( CONSTRAINT dest_table_pk PRIMARY KEY (HASH, id, TYPE));

        Table altered.
        Elapsed: 00:01:17.35

    It took Oracle < 2 min. Now the same exercise, but with an IOT table:

        SQL> CREATE TABLE dest_table_iot (
               id   NUMBER(16) NOT NULL,
               hash NUMBER(20) NOT NULL,
               type NUMBER(1)  NOT NULL,
               CONSTRAINT dest_table_iot_PK PRIMARY KEY (HASH, id, TYPE)
             ) ORGANIZATION INDEX;

        Table created.
        Elapsed: 00:00:00.03

        SQL> INSERT /*+ APPEND */ INTO dest_table_iot (HASH,id,TYPE) SELECT HASH, id, 1 FROM src_table;

    The insert into the IOT takes 18 hours!!! I have tried it on 2 different instances of Oracle, running on Windows and on Linux, and got the same results. What is going on here? Why is it taking so long?


  • Why does my jQuery/YQL call not return anything?

    - by tastyapple
    I'm trying to access YQL with jQuery but am not getting a response: http://jsfiddle.net/tastyapple/grMb3/ Anyone know why?

        $(function () {
            $.extend({
                _prepareYQLQuery: function (query, params) {
                    $.each(params, function (key) {
                        var name = "#{" + key + "}";
                        var value = $.trim(this);
                        if (!value.match(/^[0-9]+$/)) {
                            value = '"' + value + '"';
                        }
                        query = query.replace(name, value);
                    });
                    return query;
                },
                yql: function (query) {
                    var $self = this;
                    var successCallback = null;
                    var errorCallback = null;
                    if (typeof arguments[1] == 'object') {
                        query = $self._prepareYQLQuery(query, arguments[1]);
                        successCallback = arguments[2];
                        errorCallback = arguments[3];
                    } else if (typeof arguments[1] == 'function') {
                        successCallback = arguments[1];
                        errorCallback = arguments[2];
                    }
                    var doAsynchronously = successCallback != null;
                    var yqlJson = {
                        url: "http://query.yahooapis.com/v1/public/yql",
                        dataType: "jsonp",
                        success: successCallback,
                        async: doAsynchronously,
                        data: {
                            q: query,
                            format: "json",
                            env: 'store://datatables.org/alltableswithkeys',
                            callback: "?"
                        }
                    };
                    if (errorCallback) {
                        yqlJson.error = errorCallback;
                    }
                    $.ajax(yqlJson);
                    return $self.toReturn;
                }
            });

            $.yql(
                "SELECT * FROM github.repo WHERE id='#{username}' AND repo='#{repository}'",
                {
                    username: "jquery",
                    repository: "jquery"
                },
                function (data) {
                    if (data.results.repository["open-issues"].content > 0) {
                        alert("Hey dude, you should check out your new issues!");
                    }
                }
            );
        });


  • Fluent NHibernate AutoMap

    - by Markus
    Hi. I have a question regarding the AutoMap XML generation. I have two classes:

        public class User
        {
            virtual public Guid Id { get; private set; }
            virtual public String Name { get; set; }
            virtual public String Email { get; set; }
            virtual public String Password { get; set; }
            virtual public IList<OpenID> OpenIDs { get; set; }
        }

        public class OpenID
        {
            virtual public Guid Id { get; private set; }
            virtual public String Provider { get; set; }
            virtual public String Ticket { get; set; }
            virtual public User User { get; set; }
        }

    The generated XML mappings are, for the User class:

        <bag name="OpenIDs">
          <key>
            <column name="User_Id" />
          </key>
          <one-to-many class="BL_DAL.Entities.OpenID, BL_DAL, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
        </bag>

    and for the OpenID class:

        <many-to-one class="BL_DAL.Entities.User, BL_DAL, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" name="User">
          <column name="User_id" />
        </many-to-one>

    I don't see the inverse="true" attribute for the User mapping. Is this normal behavior, or did I make a mistake somewhere?


  • SQLAlchemy, one to many vs many to one

    - by sadvaw
    Dear everyone, I have the following data:

        CREATE TABLE `groups` (
            `bookID` INT NOT NULL,
            `groupID` INT NOT NULL,
            PRIMARY KEY(`bookID`),
            KEY( `groupID`)
        );

    and a book table which basically has books(bookID, name, ...), but WITHOUT groupID. There is no way for me to determine what the groupID is at the time of the insert for books. I want to do this in SQLAlchemy, hence I tried mapping Book to books joined with groups on book.bookID = groups.bookID. I made the following:

        tb_groups = Table('groups', metadata,
            Column('bookID', Integer, ForeignKey('books.bookID'), primary_key=True),
            Column('groupID', Integer),
        )
        tb_books = Table('books', metadata,
            Column('bookID', Integer, primary_key=True),
        )
        tb_joinedBookGroup = sql.join(tb_books, tb_groups, \
            tb_books.c.bookID == tb_groups.c.bookID)

    and defined the following mappers:

        mapper(Group, tb_groups, properties={
            'books': relation(Book, backref='group')
        })
        mapper(Book, tb_joinedBookGroup)
        ...

    However, when I execute this piece of code, I realized that each book object has a field groups, which is a list, and each group object has a books field which is a singular assignment. I think my definition here must have been causing SQLAlchemy to be confused about the many-to-one vs one-to-many relationship. Can someone help me sort this out? My desired goal is:

        g.books = [b, b, b, ...]
        book.group = g

    where g is an instance of Group, and b is an instance of Book.
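
    For illustration only, a sketch under an assumed, remodelled schema (not the tables above): the side that carries the ForeignKey becomes the scalar many-to-one end, so getting Group.books as a list and Book.group as a scalar means giving groups its own key and putting that key on each book row; the column can stay NULL at insert time and be filled in later. Written in current declarative syntax (SQLAlchemy 1.4+) with illustrative names.

        # Sketch only (assumed schema, illustrative names): the foreign key sits
        # on the book row, so Book.group is the scalar many-to-one side and
        # Group.books is the one-to-many list.
        from sqlalchemy import Column, Integer, String, ForeignKey, create_engine
        from sqlalchemy.orm import declarative_base, relationship, Session

        Base = declarative_base()

        class Group(Base):
            __tablename__ = "groups"
            groupID = Column(Integer, primary_key=True)
            books = relationship("Book", back_populates="group")

        class Book(Base):
            __tablename__ = "books"
            bookID = Column(Integer, primary_key=True)
            name = Column(String)
            groupID = Column(Integer, ForeignKey("groups.groupID"))  # nullable, set later
            group = relationship("Group", back_populates="books")

        engine = create_engine("sqlite://")
        Base.metadata.create_all(engine)

        with Session(engine) as s:
            g = Group(books=[Book(name="a"), Book(name="b")])
            s.add(g)
            s.commit()
            print(g.books)                # [Book, Book]
            print(g.books[0].group is g)  # True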


  • How can I get git to work with a remote server?

    - by Adrienne
    I am the CM person for a small company that just started using Git. We have two Git repositories currently hosted on a Windows box that is our all-purpose Windows server. But we just set up a dedicated server for our CM software on an Ubuntu Linux server named "Callisto". So I created a test Git repository on Callisto. I gave its directory all of the proper permissions recursively. I had the sysadmin create a login for me on Callisto, and I created a key to use for logging in via SSH. I set up my key to use a passphrase; I don't know if that could be contributing to my problems?

    Anyway, I know my SSH login works because I tested it through PuTTY. But even after hours of trials and head scratching, I can't get my Windows Git bash (msysGit) to talk to Callisto for the purposes of pushing or pulling Callisto's git repository files. I keep getting "Fatal error. The remote end hung up unexpectedly." And I've even gotten the error that Git doesn't recognize the test repository on Callisto as a git repository.

    I read online that the "Fatal error... hung up unexpectedly" is usually a problem with the server connection or permissions. So what am I missing or overlooking here? And why doesn't a pull using the git:// protocol work, since that only uses read-only access? Group and public permissions for the git repository's directory on Callisto are set to read and execute, but not write. If anyone could help, I would be so grateful. Thank you.


  • using spring, hibernate and scala, is there a better way to load test data than dbunit?

    - by egervari
    Here are some things I really dislike about DbUnit:

    1) You cannot specify the exact ordering of the inserts, because DbUnit likes to group your inserts by table name and not by the order you define them in the XML file. This is a problem when you have records depending on other records in other tables, so you have to disable foreign key constraints during your tests... which actually sucks, because these foreign key constraints will get fired in production while your tests won't be aware of them!

    2) They seem hellbent on forcing you to use an XML namespace to define your XML... and I honestly can't be bothered to do this. I like the data.xml without any namespace. It works. But they are so hellbent on deprecating it.

    3) Creating different XML files on a per-test basis is hard, so it actually encourages creating data for your entire app. Unfortunately, this process is a little bloated too once the data grows in size and things get intertangled. There has got to be a better way to split up your test data into chunks without having to copy/paste a lot of the test data across all of your tests.

    4) Keeping track of id references in a big XML file is just impossible. If you have 130 domain classes, it just gets bewildering. This model simply does not scale.

    Is there something less bloated and better in the Spring/Hibernate space? DbUnit has worn out its welcome and I'm really looking for something better.

