Search Results

Search found 9929 results on 398 pages for 'azure tables'.

  • Looking for Simplified Overview of EJB3

    - by sdoca
    Hi, I'm looking for a simplified overview of EJB3 components. I seem to understand most of the pieces of the puzzle, but can't quite fit them together in my head as a full picture. I've developed numerous web applications (WARs) that were deployed on Tomcat, but never a full-fledged EE application (EAR). I would like the overview to be as generic as possible: I'm not looking for a tutorial on setting up EJB3 on GlassFish built into NetBeans, or some other vendor-specific tutorial that's more about the IDE than the technology. I keep reading about Java, ejb-jar, web and ear modules, but am not clear on what these different modules contain and how to use them to put together my app. In my case, I want to write a simple database CRUD web application. The first step is simple: create entity classes that model the database tables my app will be using. I plan on using annotations. Should I create a jar that contains just these entity classes? Is this the ejb-jar module (sometimes referred to as the Java module)? Next, I'll need some business logic classes that make use of the entity classes. These are the session beans (stateless or stateful), correct? Should they be packaged in the same jar as the entity classes, or in a separate jar? Finally, I'll need some sort of web interface (I'll be creating a JSF portlet) that makes use of both the session and entity beans. Together with the above jar(s), this will be my war? Assuming the above to be correct, what is involved in creating an ear? Forgive me if this post is vague, but I'm having a hard time defining what it is I don't understand. Thanks for any help!
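
    A sketch of one common Java EE 5 layout (all names here are placeholders, nothing below is prescribed by the spec): entities and session beans usually live together in a single ejb-jar, and the war references them.

      myapp.ear
      |-- META-INF/application.xml
      |-- myapp-ejb.jar   <- entity classes + session beans (the "EJB module")
      `-- myapp-web.war   <- JSF portlet / web layer

      <!-- META-INF/application.xml -->
      <application xmlns="http://java.sun.com/xml/ns/javaee" version="5">
        <module><ejb>myapp-ejb.jar</ejb></module>
        <module>
          <web>
            <web-uri>myapp-web.war</web-uri>
            <context-root>/myapp</context-root>
          </web>
        </module>
      </application>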

  • Need help/suggestions for creating fantasy sports scoring databases and queries

    - by MGumbel
    I'm trying to create a website for my friends and me to keep track of fantasy sports scoring. So far I've been doing the calculations and storage in Excel, which is very tedious. I'm trying to make it more streamlined and automated with a SQL database that I can then wrap a web app around to enter daily stat updates. It's premised on our participation in another commercial site where we trade virtual shares of athletes, and thus acquire an "ownership percentage" in each athlete. For instance, if there are 100 shares of AROD and I own 10 shares, then I own 10%. This is then applied to traditional baseball rotisserie scoring. So, for instance, if AROD has 1 HR today, his adjusted HR stat would be 1.10; if he also has 2 RBIs, his adjusted RBI stat today would be 2.20, i.e. 2 x 1.10, where the 1 keeps the raw stat and the .10 adds the ownership percentage. All the stats for my team would then be summed each day and added to my stat history to arrive at an aggregated total. After that, points are allocated based on the ranking of each participant in each category at the end of the day. E.g., if there are 10 participants and I have the highest aggregate total of adjusted HRs, then I get 10 points. The points are then summed across the stat categories to produce a total point ranking for that day. An added difficulty is that ownership percentages can change on a daily basis. In playing around with different schemas so far, I don't think a separate table for each athlete's stats and each player's ownership percentages is the wisest choice. It seems to me that two tables would do: one that contains the daily stat information for each athlete, and another that records each player's ownership percentages. My friend suggested using a start and end date for each ownership percentage to represent the potential daily changes. I'm admittedly new to database development, so any suggestions on schema or query code would be appreciated.
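
    A sketch of that two-table shape (table and column names are assumptions, MySQL-ish types), with the effective-dated ownership table your friend suggested, and the adjusted-stat aggregation as one query:

      CREATE TABLE athlete_daily_stats (
          athlete_id INT NOT NULL,
          stat_date  DATE NOT NULL,
          hr         INT NOT NULL DEFAULT 0,
          rbi        INT NOT NULL DEFAULT 0,
          PRIMARY KEY (athlete_id, stat_date)
      );

      CREATE TABLE ownership (
          player_id  INT NOT NULL,
          athlete_id INT NOT NULL,
          pct        DECIMAL(5,4) NOT NULL,  -- 0.1000 = 10%
          start_date DATE NOT NULL,
          end_date   DATE NULL,              -- NULL = still in effect
          PRIMARY KEY (player_id, athlete_id, start_date)
      );

      -- adjusted HR per player per day: raw stat * (1 + ownership pct)
      SELECT o.player_id, s.stat_date, SUM(s.hr * (1 + o.pct)) AS adj_hr
      FROM athlete_daily_stats s
      JOIN ownership o
        ON o.athlete_id = s.athlete_id
       AND s.stat_date >= o.start_date
       AND (o.end_date IS NULL OR s.stat_date <= o.end_date)
      GROUP BY o.player_id, s.stat_date;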

  • Forming triangles from points and relations

    - by SiN
    Hello, I want to generate triangles from points and optional relations between them. Not all points form triangles, but many of them do. In the initial structure, I've got a database with the following tables:

      Nodes(id, value)
      Relations(id, nodeA, nodeB, value)
      Triangles(id, relation1_id, relation2_id, relation3_id)

    To generate triangles from the nodes and relations tables, I've used the following query:

      INSERT INTO Triangles
      SELECT t1.id, t2.id, t3.id
      FROM Relations t1, Relations t2, Relations t3
      WHERE t1.id < t2.id AND t3.id > t1.id
        AND (t1.nodeA = t2.nodeA
               AND (t3.nodeA = t1.nodeB AND t3.nodeB = t2.nodeB
                 OR t3.nodeA = t2.nodeB AND t3.nodeB = t1.nodeB)
          OR t1.nodeA = t2.nodeB
               AND (t3.nodeA = t1.nodeB AND t3.nodeB = t2.nodeA
                 OR t3.nodeA = t2.nodeA AND t3.nodeB = t1.nodeB))

    It works perfectly on small data sets (fewer than ~50 points). In some cases, however, I've got around 100 points, all related to each other, which leads to thousands of relations. So when the expected number of triangles is in the hundreds of thousands, or even millions, the query can take several hours. My main problem is not the SELECT itself: when I watch it execute in Management Studio, the results are returned slowly, at around 2,000 rows per minute, which is not acceptable for my case. As a matter of fact, the number of operations grows combinatorially, and that is hurting performance badly. I've tried doing it as LINQ to Objects from my code, but the performance was even worse. I've also tried using SqlBulkCopy from C# on a reader over the result, also with no luck. So the question is... any ideas or workarounds?
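
    One direction worth sketching (untested, and it assumes you can store each relation once in canonical order, nodeA < nodeB, with Triangles.id auto-generated): the canonical order removes all the OR permutations, each triangle {x < y < z} is then found exactly once as (x,y), (x,z), (y,z), and a composite index lets the self-joins seek instead of scan:

      CREATE INDEX IX_Relations_AB ON Relations (nodeA, nodeB);

      INSERT INTO Triangles (relation1_id, relation2_id, relation3_id)
      SELECT r1.id, r2.id, r3.id
      FROM Relations r1
      JOIN Relations r2 ON r2.nodeA = r1.nodeA AND r2.nodeB > r1.nodeB
      JOIN Relations r3 ON r3.nodeA = r1.nodeB AND r3.nodeB = r2.nodeB;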

  • [LaTeX] positions of page numbers, position of chapter headings, chapters AND Table of Contents, Ref

    - by kaikanmonaco
    I am writing my PhD thesis (120+ pages) in LaTeX, the deadline is approaching, and I am struggling with layout problems. I am using the book document class. I am posting both problems in this one thread because I am not sure whether the solutions might be related. The problems are: 1.) The page numbers are mostly located at the top right of each page (this is correct and where I want them to be). However, on the first page of each chapter and on the first page of what I call "special chapters", the page number is located bottom-centered. By "special chapters" I mean the Table of Contents, List of Figures, List of Tables, References, and Index. My university will not accept the thesis like this: the page number must ALWAYS be at the top right of each page, even if the page is the first page of a chapter or of something like the Table of Contents. How can I fix this? 2.) On the first page of chapters and "special chapters", the chapter title is located far too low on the page. This is the standard layout of the book class, I think. However, the chapter title must start at the very top of the page, i.e. at the same height as the normal text on the pages that follow. I mean the chapter title, not the header. That is, if there is a chapter called "Chapter 1: Dynamics of foobar under mechanical stress", then that text has to start at the top of the page, but right now it starts several centimeters below it. How can I fix this? I have tried all kinds of things to no effect; I'd be very thankful for a solution! Thanks.
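
    A preamble sketch of the usual fixes (assuming the book class; spacing values may need tuning for your layout): fancyhdr forces the page number top-right everywhere, including the "plain" page style that \chapter and the lists switch to, and titlesec pulls the chapter heading up to the top of the text block:

      \usepackage{fancyhdr}
      \pagestyle{fancy}
      \fancyhf{}                      % clear header and footer
      \fancyhead[R]{\thepage}         % page number top-right
      \renewcommand{\headrulewidth}{0pt}

      % chapter openings, ToC, LoF, LoT, etc. use the "plain" style; make it match:
      \fancypagestyle{plain}{%
        \fancyhf{}%
        \fancyhead[R]{\thepage}%
        \renewcommand{\headrulewidth}{0pt}%
      }

      % move chapter titles to the top of the text block:
      \usepackage{titlesec}
      \titleformat{\chapter}[display]
        {\normalfont\huge\bfseries}{\chaptertitlename\ \thechapter}{20pt}{\Huge}
      \titlespacing*{\chapter}{0pt}{-50pt}{40pt}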

  • How to make Spring load a JDBC Driver BEFORE initializing Hibernate's SessionFactory?

    - by Bill_BsB
    I'm developing a Spring (2.5.6) + Hibernate (3.2.6) web application that connects to a custom database, using a custom JDBC driver and a custom Hibernate dialect. I know for sure that these custom classes work (hard-coded stuff in my unit tests). The problem, I guess, is with the order in which things get loaded by Spring. Basically:

      1. The custom database initializes
      2. Spring loads beans from web.xml
      3. Spring loads the servlet beans (applicationContext.xml)
      4. Hibernate kicks in: shows its version and all the properties correctly loaded
      5. Hibernate's HbmBinder runs (maps all my classes)
      6. LocalSessionFactoryBean - Building new Hibernate SessionFactory
      7. DriverManagerConnectionProvider - using driver: MyCustomJDBCDriver at CustomDBURL
      8. I get a SQLException: No suitable driver found for CustomDBURL
      9. Hibernate loads the custom dialect
      10. My CustomJDBCDriver finally gets registered with DriverManager (log messages)
      11. SettingsFactory runs
      12. SchemaExport runs (hbm2ddl)
      13. I get a SQLException: No suitable driver found for CustomDBURL (again?!)

    The application deploys successfully, but there are no tables in my custom database. Things I have tried so far: different techniques for passing the Hibernate properties (embedded in the 'sessionFactory' bean, loaded from a hibernate.properties file; nothing worked, but I haven't tried a hibernate.cfg.xml file or a dataSource bean yet). MyCustomJDBCDriver has a static initializer block that registers itself with the DriverManager. I've also tried different combinations of lazy initialization (lazy-init="true") on the Spring beans, but nothing worked. My custom JDBC driver should be the first thing to be loaded, not sure if by Spring, but...! Can anyone give me a solution, or maybe a hint at what else I could try? I can provide more details (huge stack traces, for instance) if that helps. Thanks in advance.
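
    One workaround to sketch (untested; the driver class name is a placeholder): register the driver explicitly via a MethodInvokingFactoryBean and make the session factory depend on it, so DriverManager knows the driver before Hibernate builds the SessionFactory:

      <!-- forces Class.forName on the driver, which triggers its static registration block -->
      <bean id="jdbcDriverLoader"
            class="org.springframework.beans.factory.config.MethodInvokingFactoryBean">
        <property name="targetClass" value="java.lang.Class"/>
        <property name="targetMethod" value="forName"/>
        <property name="arguments">
          <list><value>com.example.MyCustomJDBCDriver</value></list>
        </property>
      </bean>

      <bean id="sessionFactory"
            class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"
            depends-on="jdbcDriverLoader">
        <!-- hibernate properties as before -->
      </bean>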

  • can this problem be solved with a single SQL query?

    - by PierrOz
    I have the two following tables (with some sample data):

      LOGS:
      ID | SETID | DATE
      ========================
      1  | 1     | 2010-02-25
      2  | 2     | 2010-02-25
      3  | 1     | 2010-02-26
      4  | 2     | 2010-02-26
      5  | 1     | 2010-02-27
      6  | 2     | 2010-02-27
      7  | 1     | 2010-02-28
      8  | 2     | 2010-02-28
      9  | 1     | 2010-03-01

      STATS:
      ID | OBJECTID | FREQUENCY | STARTID | ENDID
      =============================================
      1  | 1        | 0.5       | 1       | 5
      2  | 2        | 0.6       | 1       | 5
      3  | 3        | 0.02      | 1       | 5
      4  | 4        | 0.6       | 2       | 6
      5  | 5        | 0.6       | 2       | 6
      6  | 6        | 0.4       | 2       | 6
      7  | 1        | 0.35      | 3       | 7
      8  | 2        | 0.6       | 3       | 7
      9  | 3        | 0.03      | 3       | 7
      10 | 4        | 0.6       | 4       | 8
      11 | 5        | 0.6       | 4       | 8
      7  | 1        | 0.45      | 5       | 9
      8  | 2        | 0.6       | 5       | 9
      9  | 3        | 0.02      | 5       | 9

    Every day, new logs are analyzed for different sets of objects and stored in table LOGS. Among other processes, some statistics are computed on the objects contained in these sets, and the results are stored in table STATS. These statistics are computed across several logs (identified by the STARTID and ENDID columns). So, what would be the SQL query that gives me the latest computed stats for all objects, with the corresponding log dates? In the given example, the result rows would be:

      OBJECTID | SETID | FREQUENCY | STARTDATE  | ENDDATE
      ======================================================
      1        | 1     | 0.45      | 2010-02-27 | 2010-03-01
      2        | 1     | 0.6       | 2010-02-27 | 2010-03-01
      3        | 1     | 0.02      | 2010-02-27 | 2010-03-01
      4        | 2     | 0.6       | 2010-02-26 | 2010-02-28
      5        | 2     | 0.6       | 2010-02-26 | 2010-02-28

    So, the most recent stats for set 1 are computed with logs from Feb 27 to Mar 1, whereas the stats for set 2 are computed from Feb 26 to Feb 28. Object 6 is not in the result rows, as there is no stat for it within the last period. Last thing: I use MySQL. Any ideas?
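
    A sketch of one single-statement approach (untested): keep only the STATS rows whose ENDID is the newest among stats for their set, then join LOGS twice to translate STARTID/ENDID into dates. Restricting per set rather than per object is what drops object 6, whose last stat predates set 2's latest batch:

      SELECT s.OBJECTID,
             l1.SETID,
             s.FREQUENCY,
             l1.DATE AS STARTDATE,
             l2.DATE AS ENDDATE
      FROM STATS s
      JOIN LOGS l1 ON l1.ID = s.STARTID
      JOIN LOGS l2 ON l2.ID = s.ENDID
      WHERE s.ENDID = (SELECT MAX(s2.ENDID)
                       FROM STATS s2
                       JOIN LOGS l3 ON l3.ID = s2.ENDID
                       WHERE l3.SETID = l2.SETID)
      ORDER BY l1.SETID, s.OBJECTID;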

  • How can I concisely copy multiple SQL rows, with minor modifications?

    - by Steve Jessop
    I'm copying a subset of some data, so that the copy will be independently modifiable in future. One of my SQL statements looks something like this (I've changed table and column names):

      INSERT Product (ProductRangeID, Name, Weight, Price, Color, And, So, On)
      SELECT @newrangeid AS ProductRangeID, Name, Weight, Price, Color, And, So, On
      FROM Product
      WHERE ProductRangeID = @oldrangeid AND Color = 'Blue'

    That is, we're launching a new product range which initially just consists of all the blue items in some specified current range, under new SKUs. In future we may change the "blue-range" versions of the products independently of the old ones. I'm pretty new at SQL: is there something clever I should do to avoid listing all those columns, or at least avoid listing them twice? I can live with the current code, but I'd rather not have to come back and modify it if new columns are added to Product. In its current form it would just silently fail to copy the new column if I forget to do that, which should show up in testing but isn't great. I am copying every column except for the ProductRangeID (which I modify), the ProductID (incrementing primary key) and the two DateCreated and timestamp columns (which take their auto-generated values for the new row). Btw, I suspect I should probably have a separate join table between ProductID and ProductRangeID; I didn't define the tables. This is in a T-SQL stored procedure on SQL Server 2008, if that makes any difference.
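
    T-SQL has no "all columns except..." syntax, but on SQL Server 2008 you can build the column list from the catalog at run time. A sketch (the names of the excluded DateCreated/timestamp columns are assumptions, since they aren't given above):

      DECLARE @cols nvarchar(max), @sql nvarchar(max);

      SELECT @cols = STUFF((
          SELECT ', ' + QUOTENAME(COLUMN_NAME)
          FROM INFORMATION_SCHEMA.COLUMNS
          WHERE TABLE_NAME = 'Product'
            AND COLUMN_NAME NOT IN ('ProductID', 'ProductRangeID',
                                    'DateCreated', 'RowTimestamp')  -- assumed names
          FOR XML PATH('')), 1, 2, '');

      SET @sql = N'INSERT Product (ProductRangeID, ' + @cols + N')
                   SELECT @newrangeid, ' + @cols + N'
                   FROM Product
                   WHERE ProductRangeID = @oldrangeid AND Color = ''Blue''';

      EXEC sp_executesql @sql,
           N'@newrangeid int, @oldrangeid int',
           @newrangeid = @newrangeid, @oldrangeid = @oldrangeid;

    New columns are then picked up automatically, at the cost of dynamic SQL inside the stored procedure.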

  • Issue changing innodb_log_file_size

    - by savageguy
    I haven't done much tweaking in the past, so this might be relatively easy, but I am running into issues. This is what I do:

      1. Stop MySQL
      2. Edit my.cnf (changing innodb_log_file_size)
      3. Remove ib_logfile0/1
      4. Start MySQL

    It starts fine, however all InnoDB tables give the ".frm file is invalid" error, and the status shows the InnoDB engine is disabled, so I obviously go back, remove the change, and everything works again. I was able to change every other variable I've tried, but I can't seem to find out why InnoDB fails to start even after removing the log files. Am I missing something? Thanks. Edit: pasting the log below. It looks like it still finds the old log file size even though the files are not there?

    Shutdown:

      090813 10:00:14 InnoDB: Starting shutdown...
      090813 10:00:17 InnoDB: Shutdown completed; log sequence number 0 739268981
      090813 10:00:17 [Note] /usr/sbin/mysqld: Shutdown complete

    Startup after making the changes:

      InnoDB: Error: log file ./ib_logfile0 is of different size 0 5242880 bytes
      InnoDB: than specified in the .cnf file 0 268435456 bytes!
      090813 11:00:18 [Warning] 'user' entry '[email protected]' ignored in --skip-name-resolve mode.
      090813 11:00:18 [Note] /usr/sbin/mysqld: ready for connections.
      Version: '5.0.81-community-log' socket: '/var/lib/mysql/mysql.sock' port: 3306 MySQL Community Edition (GPL)
      090813 11:00:19 [ERROR] /usr/sbin/mysqld: Incorrect information in file: './XXXX/User.frm'
      090813 11:00:19 [ERROR] /usr/sbin/mysqld: Incorrect information in file: './XXXX/User.frm'
      090813 11:00:19 [ERROR] /usr/sbin/mysqld: Incorrect information in file: './XXXX/User.frm'

    It is just a spam of the same error until I correct it. When it did start, it recreated the log files, so it must be looking in the same place I am.
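
    For reference, a sketch of the procedure that usually works (paths are assumptions; back everything up first). Note the startup error shows mysqld still finding 5 MB log files, so it is worth confirming which datadir the running server actually uses (SHOW VARIABLES LIKE 'datadir') before moving the files:

      mysql -e "SET GLOBAL innodb_fast_shutdown = 0;"   # make the next shutdown a full one
      /etc/init.d/mysql stop
      # edit my.cnf: innodb_log_file_size = 256M
      mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile0.bak
      mv /var/lib/mysql/ib_logfile1 /var/lib/mysql/ib_logfile1.bak
      /etc/init.d/mysql start
      tail /var/log/mysql/error.log   # InnoDB should report creating new log files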

  • Can a Snapshot transaction fail and only partially commit in a TransactionScope?

    - by Travis Brooks
    Greetings. I stumbled onto a problem today that seems sort of impossible to me, but it's happening... I'm calling some database code in C# that looks something like this:

      using (var tran = MyDataLayer.Transaction())
      {
          MyDataLayer.ExecSproc(new SprocTheFirst(arg1, arg2));
          MyDataLayer.CallSomethingThatEventuallyDoesLinqToSql(arg1, argEtc);
          tran.Commit();
      }

    I've simplified this a bit for posting, but what's going on is that MyDataLayer.Transaction() makes a TransactionScope with IsolationLevel set to Snapshot and TransactionScopeOption set to Required. This code gets called hundreds of times a day, and almost always works perfectly. However, after reviewing some data I discovered there are a handful of records created by SprocTheFirst, but no corresponding data from CallSomethingThatEventuallyDoesLinqToSql. The only way records should exist in the tables I'm looking at is from SprocTheFirst, and it is only ever called in this one function, so if it was called and succeeded then I would expect CallSomethingThatEventuallyDoesLinqToSql to be called and succeed too, because it's all in the same TransactionScope. It's theoretically possible that some other dev mucked around in the DB, but I don't think they have. We also log all exceptions, and I can find nothing unusual happening around the time the records from SprocTheFirst were created. So, is it possible that a transaction, or more properly a declarative TransactionScope, with the Snapshot isolation level, can fail somehow and only partially commit?

  • Geohashing - recursively find neighbors of neighbors

    - by itsme
    I am looking for an elegant algorithm to recursively find neighbors of neighbors with the geohashing algorithm (http://www.geohash.org). Basically, take a central geohash, get the first 'ring' of same-size hashes around it (8 elements), then, in the next step, get the next ring around the first, etc. Have you heard of an elegant way to do so? Brute force would be to take each neighbor and get their neighbors, simply ignoring the massive overlap. Finding the neighbors around one central geohash has been solved many times (here, e.g., in Ruby: http://github.com/masuidrive/pr_geohash/blob/master/lib/pr_geohash.rb). Edit for clarification: the current solution passes in a center key and a direction, like this (with corresponding lookup tables):

      def adjacent(geohash, dir)
        base, lastChr = geohash[0..-2], geohash[-1,1]
        type = (geohash.length % 2) == 1 ? :odd : :even
        if BORDERS[dir][type].include?(lastChr)
          base = adjacent(base, dir)
        end
        base + BASE32[NEIGHBORS[dir][type].index(lastChr), 1]
      end

    (extract from Yuichiro MASUI's lib) I say this approach will get ugly soon, because the directions get ugly once we are in ring two or three. Ideally the algorithm would simply take two parameters: the center area, and the distance, with 0 being the center geohash only (["u0m"]) and 1 being the first ring, made of 8 geohashes of the same size around it (= [["u0t", "u0w"], ["u0q", "u0n"], ["u0j", "u0h"], ["u0k", "u0s"]]), two being the second ring with 16 areas around the first ring, etc. Do you see any way to deduce the 'rings' from the bits in an elegant way?
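
    A sketch along those lines (untested; it reuses the adjacent(geohash, dir) helper above and assumes the library's :top/:bottom/:left/:right direction symbols): ring n has exactly 8n cells, so starting from the ring's north-west corner, four sides of 2n steps each cover the whole perimeter without overlap:

      # ring 0 is just the center; ring n is its 8n-cell perimeter
      def ring(center, n)
        return [center] if n == 0
        cell = center
        n.times { cell = adjacent(cell, :top) }    # move to the NW corner
        n.times { cell = adjacent(cell, :left) }
        cells = []
        [:right, :bottom, :left, :top].each do |dir|
          (2 * n).times do
            cells << cell
            cell = adjacent(cell, dir)
          end
        end
        cells
      end

      # all cells out to distance d:
      # (0..d).flat_map { |i| ring(center, i) }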

  • design pattern for related inputs

    - by curiousMo
    My question is a design question. Let's say I have a data entry web page with four dropdown lists, each depending on the previous one, and a bunch of text boxes:

      ddlCountry (DropDownList)
      ddlState (DropDownList)
      ddlCity (DropDownList)
      ddlBoro (DropDownList)
      txtAddress (TextBox)
      txtZipcode (TextBox)

    and an object that represents a data row with a value for each:

      countrySeqid
      stateSeqid
      citySeqid
      boroSeqid
      address
      zipCode

    Naturally, the country, state, city and boro values will be values of primary keys from some lookup tables. When the user chooses to edit that record, I load it from the database and into the page. The issue I have is how to streamline loading the dropdown lists. I have some code that grabs the object, loops through its values and moves them to their corresponding input controls in one shot, but in this case I will have to load ddlCountry with its possible values, then assign the selected value, then do the same for the rest of the dropdowns. I guess I am looking for an elegant solution. I am using ASP.NET, but I think that is irrelevant to the question; I am looking more for a design pattern.
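
    One way to sketch it (ASP.NET; the Lookup helpers and record type are assumptions for illustration): describe the cascade as data, a list of (dropdown, items, saved value) entries, so the load-then-select logic lives in one loop instead of four copies:

      // names and Lookup.* helpers are hypothetical
      private void BindLookups(AddressRecord rec)
      {
          var bindings = new[]
          {
              new { Ddl = ddlCountry, Items = Lookup.Countries(),              Value = rec.CountrySeqid },
              new { Ddl = ddlState,   Items = Lookup.States(rec.CountrySeqid), Value = rec.StateSeqid   },
              new { Ddl = ddlCity,    Items = Lookup.Cities(rec.StateSeqid),   Value = rec.CitySeqid    },
              new { Ddl = ddlBoro,    Items = Lookup.Boros(rec.CitySeqid),     Value = rec.BoroSeqid    },
          };

          foreach (var b in bindings)
          {
              b.Ddl.DataSource     = b.Items;   // e.g. a List<KeyValuePair<int, string>>
              b.Ddl.DataValueField = "Key";
              b.Ddl.DataTextField  = "Value";
              b.Ddl.DataBind();
              b.Ddl.SelectedValue  = b.Value.ToString();
          }
      }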

  • iPhone - Using sql database - insert statement failing

    - by Satyam svv
    Hi, I'm using an SQLite database in my iPhone app. I have a table with three integer columns, and I'm using the following code to write to it:

      - (BOOL)insertTestResult
      {
          NSArray* paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
          NSString* documentsDirectory = [paths objectAtIndex:0];
          NSString* dataBasePath = [documentsDirectory stringByAppendingPathComponent:@"test21.sqlite3"];
          BOOL success = NO;
          sqlite3* database = 0;
          if (sqlite3_open([dataBasePath UTF8String], &database) == SQLITE_OK)
          {
              BOOL res = (insertResultStatement == nil) ? createStatement(insertResult, &insertResultStatement, database) : YES;
              if (res)
              {
                  int i = 1;
                  sqlite3_bind_int(insertResultStatement, 0, i);
                  sqlite3_bind_int(insertResultStatement, 1, i);
                  sqlite3_bind_int(insertResultStatement, 2, i);
                  int err = sqlite3_step(insertResultStatement);
                  if (SQLITE_ERROR == err)
                  {
                      NSAssert1(0, @"Error while inserting Result. '%s'", sqlite3_errmsg(database));
                      success = NO;
                  }
                  else
                  {
                      success = YES;
                  }
                  sqlite3_finalize(insertResultStatement);
                  insertResultStatement = nil;
              }
          }
          sqlite3_close(database);
          return success;
      }

    The sqlite3_step call always returns error 19, and I'm not able to understand where the issue is. The tables were created with the following queries:

      CREATE TABLE [Patient] (PID integer NOT NULL PRIMARY KEY AUTOINCREMENT UNIQUE,
                              PFirstName text NOT NULL, PLastName text, PSex text NOT NULL,
                              PDOB text NOT NULL, PEducation text NOT NULL,
                              PHandedness text, PType text)

      CREATE TABLE PatientResult(PID INTEGER,
                                 PFreeScore INTEGER NOT NULL,
                                 PForcedScore INTEGER NOT NULL,
                                 FOREIGN KEY (PID) REFERENCES Patient(PID))

    I have only one entry in the Patient table, with PID = 1.

      BOOL createStatement(const char* query, sqlite3_stmt** stmt, sqlite3* database)
      {
          BOOL res = (sqlite3_prepare_v2(database, query, -1, stmt, NULL) == SQLITE_OK);
          if (!res)
              NSLog(@"Error while creating %s => '%s'", query, sqlite3_errmsg(database));
          return res;
      }
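
    A sketch of the likely culprit: result code 19 is SQLITE_CONSTRAINT, and sqlite3_bind_* parameter indexes are 1-based, so the bind at index 0 fails (with SQLITE_RANGE, silently, because its return value isn't checked) and one parameter is left NULL, which then violates the NOT NULL constraints on PatientResult. Checking each return value makes this visible (the column comments assume the INSERT lists PID, PFreeScore, PForcedScore in that order):

      /* parameter indexes start at 1, not 0 */
      int rc;
      rc = sqlite3_bind_int(insertResultStatement, 1, i);    /* PID          */
      if (rc != SQLITE_OK) NSLog(@"bind 1 failed: %d", rc);
      rc = sqlite3_bind_int(insertResultStatement, 2, i);    /* PFreeScore   */
      if (rc != SQLITE_OK) NSLog(@"bind 2 failed: %d", rc);
      rc = sqlite3_bind_int(insertResultStatement, 3, i);    /* PForcedScore */
      if (rc != SQLITE_OK) NSLog(@"bind 3 failed: %d", rc);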

  • EFv1 mapping 1 to many Relationship to POCOs

    - by Scott
    I'm trying to work through a problem where I map EF entities to POCOs, which serve as DTOs. I have two tables in my database, say Products and Categories. A Product belongs to one Category, and one Category may contain many Products. My EF entities are named efProduct and efCategory, and each entity has the proper navigation property between efProduct and efCategory. My POCO objects are simple:

      public class Product
      {
          public string Name { get; set; }
          public int ID { get; set; }
          public double Price { get; set; }
          public Category ProductType { get; set; }
      }

      public class Category
      {
          public int ID { get; set; }
          public string Name { get; set; }
          public List<Product> products { get; set; }
      }

    To get a list of products, I am able to do something like:

      public IQueryable<Product> GetProducts()
      {
          return from p in ctx.Products
                 select new Product
                 {
                     ID = p.ID,
                     Name = p.Name,
                     Price = p.Price,
                     ProductType = p.Category
                 };
      }

    However, there is a type mismatch error, because p.Category is of type efCategory. How can I resolve this? That is, how can I convert p.Category to type Category? I know later versions of EF added POCO support, but I'm forced to use .NET 3.5 SP1.
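
    One sketch that stays inside LINQ to Entities on 3.5 SP1 (untested): project the related entity into the POCO as part of the same query, so no efCategory ever leaves the data layer:

      public IQueryable<Product> GetProducts()
      {
          return from p in ctx.Products
                 select new Product
                 {
                     ID = p.ID,
                     Name = p.Name,
                     Price = p.Price,
                     // build the POCO Category inline from the navigation property
                     ProductType = new Category
                     {
                         ID = p.Category.ID,
                         Name = p.Category.Name
                     }
                 };
      }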

  • Why does the WCF 3.5 REST Starter Kit do this?

    - by Brandon
    I am setting up a REST endpoint that looks like the following:

      [WebInvoke(Method = "POST", UriTemplate = "?format=json",
                 BodyStyle = WebMessageBodyStyle.WrappedRequest,
                 ResponseFormat = WebMessageFormat.Json)]

    and

      [WebInvoke(Method = "DELETE", UriTemplate = "?token={token}&format=json",
                 ResponseFormat = WebMessageFormat.Json)]

    The above throws the following error:

      UriTemplateTable does not support '?format=json' and '?token={token}&format=json'
      since they are not equivalent, but cannot be disambiguated because they have
      equivalent paths and the same common literal values for the query string.
      See the documentation for UriTemplateTable for more detail.

    I am not an expert at WCF, but I would imagine that it should map first by the HTTP method and then by the URI template. It appears to be backwards. If both of my URI templates are ?token={token}&format=json, this works because they are equivalent, and it then appears to look at the HTTP method, where one is POST and the other is DELETE. Is REST supposed to work this way? Why are the URI template tables not sorted first by HTTP method and then by URI template? This causes some serious frustration when one HTTP method requires a parameter and another does not, or when I want optional parameters (e.g., if the 'format' parameter is not passed, default to XML).
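
    As I understand it, WCF's operation selector matches the URI against a single UriTemplateTable built from all operations before filtering by HTTP method, so two templates on the same path must differ in their query-string literals. One workaround to sketch (untested; SomeRequest is a placeholder type): move the variable part into the path for DELETE so the templates are no longer ambiguous:

      [OperationContract]
      [WebInvoke(Method = "POST", UriTemplate = "?format=json",
                 BodyStyle = WebMessageBodyStyle.WrappedRequest,
                 ResponseFormat = WebMessageFormat.Json)]
      void Create(SomeRequest body);

      [OperationContract]
      [WebInvoke(Method = "DELETE", UriTemplate = "{token}?format=json",
                 ResponseFormat = WebMessageFormat.Json)]
      void Delete(string token);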

  • How to Store and Retrieve Images Using SQL Server (Server Management Studio)

    - by Joe Majewski
    I am having difficulties when trying to insert files into a SQL Server database. I'll try to break this down as best as I can:

      1. What data type should I be using to store image files (jpeg/png/gif/etc.)? Right now my table uses the image data type, but I am curious whether varbinary would be a better option.
      2. How would I go about inserting the image into the database? Does Microsoft SQL Server Management Studio have any built-in functions that allow insertion of files into tables? If so, how is that done? Also, how could this be done through an HTML form, with PHP handling the input data and placing it into the table?
      3. How would I fetch the image from the table and display it on the page? I understand how to SELECT the cell's contents, but how would I go about translating that into a picture? Would I have to send a header("Content-Type: image/jpeg")?

    I have no problem doing any of these things with MySQL, but the SQL Server environment is still new to me, and I am working on a project for my job that requires the use of stored procedures to grab various data. Any and all help is appreciated.
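
    A sketch of the storage side (table, column, and file names are assumptions): varbinary(max) is the type to prefer, since the image type is deprecated in SQL Server 2005 and later, and OPENROWSET ... SINGLE_BLOB lets you pull a file straight into a row from a Management Studio query window:

      CREATE TABLE Pictures
      (
          PictureID int IDENTITY(1,1) PRIMARY KEY,
          FileName  nvarchar(260)  NOT NULL,
          MimeType  varchar(50)    NOT NULL,
          Data      varbinary(max) NOT NULL
      );

      INSERT INTO Pictures (FileName, MimeType, Data)
      SELECT 'photo.jpg', 'image/jpeg', BulkColumn
      FROM OPENROWSET(BULK N'C:\images\photo.jpg', SINGLE_BLOB) AS img;

    On the PHP side the pattern mirrors MySQL: SELECT the Data and MimeType columns (via your stored procedure), send header('Content-Type: ...'), then echo the bytes.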

  • Hibernate NamingStrategy implementation that maintains state between calls

    - by Robert Petermeier
    Hi, I'm working on a project where we use Hibernate and JBoss 5.1. We need our entity classes to be mapped to Oracle tables that follow a certain naming convention, and I'd like to avoid having to specify each table and column name in annotations. Therefore, I'm currently considering writing a custom implementation of org.hibernate.cfg.NamingStrategy. The SQL naming conventions require the names of columns to carry a suffix that is equivalent to a prefix of the table name: if there is a table "T100_RESOURCE", the ID column would have to be named "RES_ID_T100". In order to implement this in a NamingStrategy, the implementation would have to maintain state, i.e. the current class name it is creating the mappings for. It would rely on Hibernate always calling classToTableName() before propertyToColumnName(), and determining all column names by calling propertyToColumnName() before the next call to classToTableName(). Is it safe to do that, or are there situations where Hibernate will mix things up? I am not thinking of problems caused by multiple threads here (which can be solved by keeping the last class name in a ThreadLocal), but of Hibernate deliberately calling these out of order in certain circumstances: for example, asking for the mappings of three properties of class A, then one of class B, then again more attributes of class A.
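
    For concreteness, a sketch of the stateful strategy (untested, and subject to exactly the call-ordering assumption described above; it only handles the table-prefix suffix, not the "RES_" abbreviation part):

      import org.hibernate.cfg.ImprovedNamingStrategy;

      public class SuffixNamingStrategy extends ImprovedNamingStrategy {

          // last table name seen on this thread
          private static final ThreadLocal<String> currentTable = new ThreadLocal<String>();

          @Override
          public String classToTableName(String className) {
              String table = super.classToTableName(className);
              currentTable.set(table);
              return table;
          }

          @Override
          public String propertyToColumnName(String propertyName) {
              String column = super.propertyToColumnName(propertyName);
              String table = currentTable.get();
              if (table != null) {
                  // e.g. table "T100_RESOURCE" -> suffix "T100"
                  String prefix = table.split("_")[0];
                  column = column + "_" + prefix;
              }
              return column;
          }
      }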

  • How can I efficiently manipulate 500k records in SQL Server 2005?

    - by cdeszaq
    I am getting a large text file of updated information from a customer that contains updates for 500,000 users. However, as I am processing this file, I often run into SQL Server timeout errors. Here's the process I follow in my VB application that processes the data (in general):

      1. Delete all records from the temporary table (to remove last month's data), e.g. DELETE FROM tempTable
      2. Rip the text file into the temp table
      3. Fill in extra information in the temp table, such as organization_id, user_id, group_code, etc.
      4. Update the data in the real tables based on the data computed in the temp table

    The problem is that I often run commands like:

      UPDATE tempTable
      SET user_id = (SELECT user_id FROM myUsers WHERE external_id = tempTable.external_id)

    and these commands frequently time out. I have tried bumping the timeouts up to as much as 10 minutes, but they still fail. Now, I realize that 500k rows is no small number to manipulate, but I would think that a database purported to handle millions and millions of rows should cope with 500k pretty easily. Am I doing something wrong in how I am processing this data? Please help. Any and all suggestions welcome.
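
    Two things usually help here, sketched below: index the join key on both sides, and rewrite the correlated-subquery UPDATE as a set-based join (SQL Server's UPDATE ... FROM), which resolves the lookup once per row via the index instead of re-running a subquery. One behavioral difference to note: the join form leaves unmatched rows untouched, whereas the subquery form sets them to NULL.

      CREATE INDEX IX_tempTable_external_id ON tempTable (external_id);
      CREATE INDEX IX_myUsers_external_id   ON myUsers  (external_id);

      UPDATE t
      SET    t.user_id = u.user_id
      FROM   tempTable t
      JOIN   myUsers   u ON u.external_id = t.external_id;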

  • Doctrine: How to traverse from an entity to another 'linked' entity?

    - by ropstah
    I'm loading 3 different tables using a cross join in Doctrine_RawSql. This brings me back the following object:

      User          -> User class (Doctrine base class)
      Settings      -> Doctrine_Collection of Setting
      User_Settings -> Doctrine_Collection of User_Setting

    The object above is the result of a many-to-many relationship between User and Setting, where User_Setting acts as a reference table. User_Setting also contains another field, named value, which obviously contains the value of the corresponding Setting. All good so far; however, the Settings and User_Settings properties of the returned User object are in no way linked to each other (apart from the setting_id field, of course). Is there any direct way to traverse from the Settings property to the corresponding User_Settings property? This is the corresponding query:

      $sets = new Doctrine_RawSql();
      $sets->select('{us.*}, {s.*}, {uset.*}')
           ->from('(User us CROSS JOIN Setting s) LEFT JOIN User_Setting uset ON us.user_id = uset.user_id AND s.setting_id = uset.setting_id')
           ->addComponent('us', 'User us')
           ->addComponent('uset', 'us.User_Setting uset')
           ->addComponent('s', 'us.Setting s')
           ->where('s.category_id = ? AND us.usr_auto_key = ?', array(1, 1));
      $sets = $sets->execute();
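
    Not directly, as far as I know. A cheap workaround sketch (untested): index the User_Setting collection by setting_id once, then look values up while iterating Settings:

      $bySetting = array();
      foreach ($user->User_Setting as $us) {
          $bySetting[$us->setting_id] = $us;   // one pass to build the index
      }
      foreach ($user->Setting as $s) {
          $value = isset($bySetting[$s->setting_id])
                 ? $bySetting[$s->setting_id]->value
                 : null;                        // no User_Setting row for this Setting
          // ... use $s and $value together ...
      }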

  • Slow response on checkbox using jQuery

    - by Dean
    Hi to all. I'm trying to optimize a website. The flow is: I query a certain table and list all of its entries on a page (with toolbars), which works fine. The problem comes when I edit the page: when I click a checkbox, I have to wait 2-5 seconds just for the click to register. I limited the view to 5 entries only, but the checkbox response time doesn't change. The tables have 100 entries in them.

      function checkAccess(celDiv, id) {
          var celValue = $(celDiv).html();
          if (celValue == 1)
              $(celDiv).html("<input type='checkbox' value='" + $(celDiv).html() + "' checked disabled>")
          else
              $(celDiv).html("<input type='checkbox' value='" + $(celDiv).html() + "' disabled>")
          $(celDiv).click(function() {
              $('input', this).each(function() {
                  tr_idx = $('#detFlex1 tbody tr').index($(this).parent().parent().parent());
                  td_idx = $('#detFlex1 tbody tr:eq(' + tr_idx + ') td').index($(this).parent().parent());
                  td_last = 13;
                  for (var td = td_idx + 1; td <= td_last; td++) {
                      if ($(this).attr('checked') == true) {
                          df[0].rows[tr_idx].cell[td_idx] = 1; // index[1] = Full Access
                          if (td_idx == 3) {
                              df[0].rows[tr_idx].cell[td] = 1;
                          }
                          df[0].rows[tr_idx].cell[2] = 1;
                          if (td_idx > 3) {
                              df[0].rows[tr_idx].cell[2] = 1;
                          }
                      } else {
                          df[0].rows[tr_idx].cell[td_idx] = 0; // index[0] = With Access
                          if (td_idx == 2) {
                              df[0].rows[tr_idx].cell[td] = 0;
                          } else if (td_idx == 3) {
                              df[0].rows[tr_idx].cell[td] = 0;
                          }
                          if (td_idx > 3) {
                              df[0].rows[tr_idx].cell[3] = 0;
                          }
                      }
                  }
                  $('#detFlex1').flexAddData(df[0]);
                  $('.toolbar a[title=Edit Item]').trigger('click');
              });
          });
      }

    I think the problem is the code above. Could anyone help me simplify it?
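
    One direction to sketch (untested; the selectors assume your flexigrid markup): the per-cell click handlers, plus the full-grid redraw via flexAddData on every click, are the likely cost. A single delegated handler on the table avoids rebinding, and updating df[0] once per change keeps the DOM work down:

      // one handler for the whole grid instead of one per checkbox cell
      $('#detFlex1 tbody').click(function (e) {
          var input = $(e.target).closest('td').find('input:checkbox');
          if (input.length === 0) return;      // click was not on a checkbox cell
          // ... update df[0] for this row/column only ...
          $('#detFlex1').flexAddData(df[0]);   // redraw once, not per input
      });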

  • PL/SQL - Two statements with begin and end run fine separately but not together?

    - by Twiss
    Hi all, just wondering if anyone can help with this. I have two PL/SQL statements for altering tables (adding extra fields), as follows:

      -- Make GC_NAB field for Next Action By dropdown
      begin
        if 'VARCHAR2' = 'NUMBER' and length('VARCHAR2') > 0 and length('') > 0 then
          execute immediate 'alter table "SERVICEMAIL6"."ETD_GUESTCARE" add(GC_NAB VARCHAR2(10, ))';
        elsif ('VARCHAR2' = 'NUMBER' and length('VARCHAR2') > 0 and length('') = 0) or 'VARCHAR2' = 'VARCHAR2' then
          execute immediate 'alter table "SERVICEMAIL6"."ETD_GUESTCARE" add(GC_NAB VARCHAR2(10))';
        else
          execute immediate 'alter table "SERVICEMAIL6"."ETD_GUESTCARE" add(GC_NAB VARCHAR2)';
        end if;
        commit;
      end;

      -- Make GC_NABID field for Next Action By dropdown
      begin
        if 'NUMBER' = 'NUMBER' and length('NUMBER') > 0 and length('') > 0 then
          execute immediate 'alter table "SERVICEMAIL6"."ETD_GUESTCARE" add(GC_NABID NUMBER(, ))';
        elsif ('NUMBER' = 'NUMBER' and length('NUMBER') > 0 and length('') = 0) or 'NUMBER' = 'VARCHAR2' then
          execute immediate 'alter table "SERVICEMAIL6"."ETD_GUESTCARE" add(GC_NABID NUMBER())';
        else
          execute immediate 'alter table "SERVICEMAIL6"."ETD_GUESTCARE" add(GC_NABID NUMBER)';
        end if;
        commit;
      end;

    When I run these two queries separately, no problems. However, when they are run together as shown above, Oracle gives me an error when it starts the second statement:

      Error report:
      ORA-06550: line 15, column 1:
      PLS-00103: Encountered the symbol "BEGIN"
      06550. 00000 - "line %s, column %s:\n%s"
      *Cause: Usually a PL/SQL compilation error.
      *Action:

    I'm assuming this means the first statement is not terminated properly... is there anything I should put between the statements to make them work? Thanks in advance, everyone!
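
    Yes: in a SQL*Plus or SQL Developer script, each anonymous PL/SQL block must be terminated by a slash on a line of its own; the semicolon alone does not end the block, so the client reads the second "begin" as part of the first. (The COMMITs are also redundant, since DDL such as ALTER TABLE commits implicitly.) A minimal sketch of the pattern:

      begin
        execute immediate 'alter table "SERVICEMAIL6"."ETD_GUESTCARE" add (GC_NAB VARCHAR2(10))';
      end;
      /

      begin
        execute immediate 'alter table "SERVICEMAIL6"."ETD_GUESTCARE" add (GC_NABID NUMBER)';
      end;
      /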

  • Update table using SSIS

    - by thursdaysgeek
    I am trying to update a field in a table with data from another table, based on a common key. In straight SQL, it would be something like:

      UPDATE EHSIT
      SET    e.IDMSObjID = s.IDMSObjID
      FROM   EHSIT e, EHSIDMS s
      WHERE  e.SITENUM = s.SITE_CODE

    However, the two tables are not in the same database, so I'm trying to use SSIS to do the update. Oh, and SITENUM/SITE_CODE is varchar in one and nvarchar in the other, so I'll have to do a data conversion to make them match. How do I do it? I have a data flow object with EHSIDMS as the source and EHSIT as the destination, and a data conversion to convert the Unicode to non-Unicode. But how do I update based on the match? I've tried using a SQL command as the destination's data access mode, but it doesn't appear to have the source table. If I just map the field to be updated, how does it limit it based on the matching fields? I'm about to export my source table to Excel or something and try inputting from there, although all that would seem to gain me is removing the data conversion step. Shouldn't there be an update data task or something? Is it one of those Data Flow transformation tasks that I'm just not figuring out?
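
    For what it's worth: SSIS has no set-based update destination. The OLE DB Command transformation can run an UPDATE per row, but that is slow on large tables. A common alternative, sketched under the assumption that you can create a staging table on the destination server: land EHSIDMS there with the data flow (doing the nvarchar-to-varchar conversion en route), then run one joined UPDATE in an Execute SQL Task (Staging_EHSIDMS is a placeholder name):

      UPDATE e
      SET    e.IDMSObjID = s.IDMSObjID
      FROM   EHSIT e
      JOIN   Staging_EHSIDMS s
        ON   e.SITENUM = s.SITE_CODE;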

  • Kohana Auth Library Deployment

    - by Steve
    My Kohana app runs perfectly on my local machine. When I deployed the app to a server (and adjusted the config files appropriately), I could no longer log into it. I've traced through the app's login routine on both my local version and the server version, and they agree with each other all the way until the logged_in() routine in the auth.php controller, where suddenly, at line 140 (the is_object($this->user) test), the $user object no longer exists!? The login() call that leads to logged_in() successfully passes the following test, which causes a redirect to the logged_in() function:

      if (Auth::instance()->login($user, $post['password']))

    Yes, the password and hash, etc., all work perfectly. Here is the offending code:

      public function logged_in()
      {
          if ( ! is_object($this->user))
          {
              // No user is currently logged in
              url::redirect('auth/login');
          }
          etc...
      }

    As the code is the same between my local installation and the server, I reckon some server setting must be messing with me. FYI: all the rest of the code works, because I have a temporary backdoor that lets me use the application (view pages of tables, etc.) without being logged in. Any ideas?

  • Entity and N-Tier architecture in C#

    - by acadia
    Hello, I have three tables, as shown below:

      Emp
      ----
      empID int
      empName
      deptID

      empDetails
      -----------
      empDetailsID int
      empID int

      empDocuments
      --------------
      docID
      empID
      docName
      docType

    I am creating entity classes so that I can use an n-tier architecture to do database transactions etc. in C#. I started creating a class as shown below:

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Text;

      namespace employee
      {
          class emp
          {
              private int empID;
              private string empName;
              private int deptID;

              public int EmpID { get; set; }
              public string EmpName { get; set; }
              public int deptID { get; set; }
          }
      }

    My question is: since empDetails and empDocuments are related to emp by empID, how do I represent them in my emp class? I would appreciate it if you could direct me to an example. Thanks.
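
    A sketch of one common shape (type choices are assumptions where the column lists above don't say): the auto-properties replace the private fields entirely, avoiding the duplicate deptID member, and each child table becomes a collection on the parent:

      using System.Collections.Generic;

      public class EmpDetail
      {
          public int EmpDetailsID { get; set; }
          public int EmpID { get; set; }
      }

      public class EmpDocument
      {
          public int DocID { get; set; }
          public int EmpID { get; set; }
          public string DocName { get; set; }
          public string DocType { get; set; }
      }

      public class Emp
      {
          public int EmpID { get; set; }
          public string EmpName { get; set; }
          public int DeptID { get; set; }

          // one employee -> many detail and document rows
          public List<EmpDetail> Details { get; set; }
          public List<EmpDocument> Documents { get; set; }

          public Emp()
          {
              Details = new List<EmpDetail>();
              Documents = new List<EmpDocument>();
          }
      }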

  • How to map oracle timestamp to appropriate java type in hibernate?

    - by jschoen
    I am new to Hibernate and I am stumped. In my database I have tables with columns of type TIMESTAMP(6). I am using NetBeans 6.5.1, and when I generate the hibernate.reveng.xml, hbm.xml and POJO files, it sets those columns to type Serializable. This is not what I expected, nor what I want them to be. I found a post on the Hibernate forums saying to add a type-mapping entry to the hibernate.reveng.xml file. In NetBeans you are not able to generate the mappings from this file (it creates a new one every time), and it does not seem to be able to re-generate them from the file either (at least according to this, that is slated to be available in version 7). So I am trying to figure out what to do. I am more inclined to believe I am doing something wrong, since I am new to this, and it seems like this would be a common problem. So what am I doing wrong? If I am not doing anything wrong, how do I work around this? I am using NetBeans 6.5, Oracle 10g, and I believe Hibernate 3 (it came with my NetBeans). Edit: I meant to say that I also found this Stack Overflow question, but it is really a different problem.
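
    For reference, a sketch of the kind of entry that forum post describes (an assumption reconstructed from memory, so check it against the hibernate-reverse-engineering DTD): it maps Oracle's TIMESTAMP columns to Hibernate's timestamp type instead of Serializable:

      <hibernate-reverse-engineering>
        <type-mapping>
          <sql-type name="TIMESTAMP" hibernate-type="timestamp" />
        </type-mapping>
      </hibernate-reverse-engineering>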

  • MySQL running on an EC2 m1.small instance has high load but low memory usage, possible resolutions?

    - by Tosh
    I have a MySQL 5.0.75 server on Ubuntu, on an m1.small instance running on Amazon's EC2, as part of an application. During peak usage the server load rises very high while memory usage stays low, and the application server becomes unresponsive because it is waiting for query results. The application server has only 5-8 Apache processes running (mod_perl processes). The data directory uses only 140 MB, so the MyISAM tables aren't very big. The queries are pretty complicated, with some big joins, and the application makes a lot of queries. mysqltuner reports everything OK except "Maximum possible memory usage: 1.7G (99% of installed RAM)", but I'm nowhere close to using that. My question is: where should I be looking to fix this? Is this something that can be tuned away, or do I just need a larger instance/server? Googling suggests either, and also upgrading the MySQL server. Any pointers in the right direction would be greatly appreciated, thanks! EDIT: I just discovered this in my slow query log:

      # Time: 101116 11:17:00
      # User@Host: user[pass] @ [host]
      # Query_time: 4063  Lock_time: 1035  Rows_sent: 0  Rows_examined: 19960174
      SELECT * FROM contacts
      WHERE contacts.contact_id IN
        (SELECT external_id FROM contact_relations
         WHERE external_table = 'contacts'
           AND contact_id IN
             (SELECT contact_id FROM contacts
              WHERE (company_name LIKE '%%butan%%%' OR country LIKE '%%butan%%%'
                 OR city LIKE '%%butan%%%' OR email1 LIKE '%%butan%%%')
                AND (company_name IS NOT NULL AND company_name != '')));

    Which actually brings up a different but related question: if I have a contact record containing "John Smith, The Fun Factory, 555-1212, [email protected]", what's the best way to search for that record using "factory" as a search key? Fulltext rarely seems to find items in the middle of a word; for example, "actor" should bring up "Factory".
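
    On the query itself: MySQL 5.0 executes IN (SELECT ...) as a correlated subquery re-evaluated per outer row, which fits the 19,960,174 rows examined. A join rewrite, as a sketch (untested), evaluates the search predicate once; the leading-wildcard LIKEs still cannot use an index, and MySQL's FULLTEXT indeed matches whole words or prefixes only, never substrings like "actor" inside "Factory":

      SELECT DISTINCT c.*
      FROM contacts c
      JOIN contact_relations cr ON cr.external_table = 'contacts'
                               AND cr.external_id   = c.contact_id
      JOIN contacts c2          ON c2.contact_id    = cr.contact_id
      WHERE (c2.company_name LIKE '%butan%'
          OR c2.country      LIKE '%butan%'
          OR c2.city         LIKE '%butan%'
          OR c2.email1       LIKE '%butan%')
        AND c2.company_name IS NOT NULL
        AND c2.company_name <> '';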
