Search Results

Search found 28024 results on 1121 pages for 'sql 2014'.

Page 1084/1121

  • Domain validates but won't save

    - by marko
    I have the following setup: a class, say, Car that has a CarPart (belongsTo = [car: Car]). When I'm creating a Car I also want to create some default CarParts, so I do def car = new Car(bla bla bla) and def part = new CarPart(car: car). Now, when I do car.validate() or part.validate() it seems fine. But when I do if (car.save() && part.save()) I get this exception: 2012-03-24 14:02:21,943 [http-8080-4] ERROR util.JDBCExceptionReporter - Batch entry 0 insert into car_part (version, car_id, id) values ('0', '297', '298') was aborted. Call getNextException to see the cause. 2012-03-24 14:02:21,943 [http-8080-4] ERROR util.JDBCExceptionReporter - ERROR: value too long for type character varying(6) 2012-03-24 14:02:21,943 [http-8080-4] ERROR events.PatchedDefaultFlushEventListener - Could not synchronize database state with session org.hibernate.exception.DataException: Could not execute JDBC batch update Stacktrace follows: java.sql.BatchUpdateException: Batch entry 0 insert into car_part (version, deal_id, id) values ('0', '297', '298') was aborted. Call getNextException to see the cause. at org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2621) at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1837) at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:407) at org.postgresql.jdbc2.AbstractJdbc2Statement.executeBatch(AbstractJdbc2Statement.java:2754) at $Proxy20.flush(Unknown Source) at ristretto.DealController$_closure5.doCall(DealController.groovy:109) at ristretto.DealController$_closure5.doCall(DealController.groovy) at java.lang.Thread.run(Thread.java:722) Any ideas?

    Read the article
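
    The "value too long for type character varying(6)" error comes from the database at flush time, which is why validate() still passes. A minimal Grails sketch (the partCode property and its size are hypothetical) that keeps the GORM constraint in line with the real column width and surfaces the failure at the save() call instead of at flush:

        // Hypothetical CarPart: keep the constraint in sync with the column,
        // or widen the column and migrate the schema instead.
        class CarPart {
            static belongsTo = [car: Car]

            String partCode

            static constraints = {
                partCode maxSize: 6   // must not exceed the varchar(6) column
            }
        }

        def car  = new Car(/* ... */)
        def part = new CarPart(car: car, partCode: 'ABC123')

        // flush + failOnError raises the real database error right here,
        // instead of a batch failure at session flush.
        if (car.validate() && part.validate()) {
            car.save(flush: true, failOnError: true)
            part.save(flush: true, failOnError: true)
        }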

  • Two entities with @ManyToOne joins the same table

    - by Ivan Yatskevich
    I have the following entities Student @Entity public class Student implements Serializable { @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; //getter and setter for id } Teacher @Entity public class Teacher implements Serializable { @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; //getter and setter for id } Task @Entity public class Task implements Serializable { @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; @ManyToOne(optional = false) @JoinTable(name = "student_task", inverseJoinColumns = { @JoinColumn(name = "student_id") }) private Student author; @ManyToOne(optional = false) @JoinTable(name = "student_task", inverseJoinColumns = { @JoinColumn(name = "teacher_id") }) private Teacher curator; //getters and setters } Consider that author and curator are already stored in DB and both are in the attached state. I'm trying to persist my Task: Task task = new Task(); task.setAuthor(author); task.setCurator(curator); entityManager.persist(task); Hibernate executes the following SQL: insert into student_task (teacher_id, id) values (?, ?) which, of course, leads to null value in column "student_id" violates not-null constraint Can anyone explain this issue and possible ways to resolve it?

    Read the article
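
    Both @ManyToOne associations above are mapped to the same join table (student_task), so Hibernate writes each association with its own partial insert and the other column is left null. A hedged sketch of the two usual fixes (column and table names are only illustrative):

        import java.io.Serializable;
        import javax.persistence.*;

        // Option 1: plain foreign-key columns on the task table (simplest).
        @Entity
        public class Task implements Serializable {

            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;

            @ManyToOne(optional = false)
            @JoinColumn(name = "author_id")
            private Student author;

            @ManyToOne(optional = false)
            @JoinColumn(name = "curator_id")
            private Teacher curator;

            // Option 2 (if join tables are required): give each association
            // its own table, e.g. @JoinTable(name = "task_author", ...) and
            // @JoinTable(name = "task_curator", ...), so each insert covers
            // all non-null columns of its own table.
        }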

  • Memory Leakage using datatables

    - by Vix
    Hi, I have a situation in which I'm compelled to retrieve 30,000 records each into 2 DataTables. I need to do some manipulations and insert the records into SQL Server in the Manipulate(dt1, dt2) function. I have to do this 15 times, as you can see in the for loop. Now I want to know what would be the most effective approach in terms of memory usage. I've used the first approach. Please suggest the best approach. (1) for (int i = 0; i < 15; i++) { DataTable dt1 = GetInfo(i); DataTable dt2 = GetData(i); Manipulate(dt1,dt2); } (OR) (2) DataTable dt1 = new DataTable(); DataTable dt2 = new DataTable(); for (int i = 0; i < 15; i++) { dt1=null; dt2=null; dt1 = GetInfo(); dt2 = GetData(); Manipulate(dt1, dt2); } Thanks, Vix.

    Read the article
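
    Approach (1) is already the better of the two: the tables go out of scope every iteration, so the previous 30,000 rows become collectible, while the null assignments in approach (2) add nothing. A minimal sketch of approach (1) with explicit disposal (method signatures assumed from the question):

        // requires: using System.Data;
        for (int i = 0; i < 15; i++)
        {
            using (DataTable dt1 = GetInfo(i))
            using (DataTable dt2 = GetData(i))
            {
                Manipulate(dt1, dt2);
                // Dispose runs here; each iteration's 30,000 rows are
                // released and become eligible for garbage collection.
            }
        }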

  • IIS doesn't send two responses to the same client at the same time (only for ASP)

    - by dr. evil
    I've got 2 ASP pages. I make a request to the first page from Firefox (which takes 30 seconds to process on the server side), and during those 30 seconds I make another request from Firefox to the second page (which takes less than 1 second on the server side), but the response only comes back after 31 seconds, because it waits for the first request to finish. When I request the first page from Firefox and then request the second page from IE, it's instant. So basically ASP on IIS 6 is somehow limiting every client to one (long-running) request at a time. I need to get around this problem in my .NET client application. This has been tested on 3 different systems. If you want to test it, you can try the ASP scripts at the end. The behaviour is the same for a long SQL execution or just a time-consuming ASP operation. Note: It's not about HTTP Keep-Alive. It's not about the persistent connection limit (we tried to increase this in Firefox and in .NET with Net.ServicePointManager.DefaultConnectionLimit). It's not about the User-Agent. This doesn't happen in ASP.NET, so I assume it's something to do with ASP.dll. I'm trying to solve this on the client, not the server; I don't have direct control over the server, it's a 3rd-party solution. Is there any way to get around this? Sample ASP Code: First ASP: <% Set cnn = Server.CreateObject("Adodb.Connection") cnn.Open "Provider=sqloledb;Data Source=.;Initial Catalog=master;User Id=sa;Password=;" cnn.Execute("WAITFOR DELAY '0:0:30'") cnn.Close %> Second ASP: <% Response.Write "bla bla" %>

    Read the article
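
    The Firefox-vs-IE difference is the classic symptom of ASP session locking: requests that carry the same ASP session cookie are processed one at a time, while a different browser gets its own session and is served immediately. A hedged client-side sketch for the .NET application, assuming session locking really is the cause: give each concurrent request its own cookie container so the requests never share an ASP session.

        using System.Net;

        // Hypothetical helper: every call gets a fresh CookieContainer, so IIS
        // assigns a separate ASP session and does not serialize the requests.
        static string FetchWithOwnSession(string url)
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.CookieContainer = new CookieContainer(); // never reuse across concurrent calls
            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new System.IO.StreamReader(response.GetResponseStream()))
            {
                return reader.ReadToEnd();
            }
        }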

  • What is the best way to do scoped finds based on access control rules in Rails?

    - by Rafael Szuminski
    Hi, I need to find an elegant solution for scoped finds based on access control rules. Essentially I have the following setup: Users, Customers, and AccessControl, which defines which user has access to another user's data. Users need to be able to access not just their own customers but also shared customers of other users. Obviously something like a simple association will not work: has_many :customers and neither will this: has_many :customers, :conditions => 'user_id in (1,2,3,4,5)' because the association uses with_scope and the added condition is an AND condition, not an OR condition. I also tried overriding the find and method_missing methods with an association extension like this: has_many :customers do def find(*args) #get the user_id and retrieve access conditions based on the id #do a find based on the access conditions and passed args end def method_missing(*args) #get the user_id and retrieve access conditions based on the id #do a find based on the access conditions and passed args end end but the issue is that I don't have access to the user object / parent object inside the extension methods and it just does not work as planned. I also tried default_scope but, as posted here before, you can't pass a block to a default scope. Anyhow, I know that data segmentation and data access controls have been done before using Rails, and I am wondering if somebody has found an elegant way to do it. UPDATE: The AccessControl table has the layout (user_id, shared_user_id) and the customer table has the structure (id, account_id, user_id, first_name, last_name). Assuming the AccessControl table holds the pairs (1,1), (1,3), (1,4), (2,2), (2,13) and so on, and the account_id for user 1 is 13, I need to be able to retrieve customers that can be best described with the following SQL statement: select * from customers where (account_id = 13 and user_id = null) or (user_id in (1,3,4))

    Read the article
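
    Because the association scope can only AND extra conditions on, one workaround is to skip the association for this case and build the OR condition in a class-level finder that takes the user. A hedged Rails 2.x-style sketch against the schema described in the update (adapt to scopes/where on Rails 3):

        # Minimal sketch, assuming AccessControl exposes the
        # (user_id, shared_user_id) pairs described above.
        class Customer < ActiveRecord::Base
          def self.visible_to(user)
            shared_ids = AccessControl.find_all_by_user_id(user.id).map(&:shared_user_id)
            all(:conditions => ["(account_id = ? AND user_id IS NULL) OR user_id IN (?)",
                                user.account_id, shared_ids])
          end
        end

        # usage
        customers = Customer.visible_to(current_user)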

  • Working with PHP and MySQL - need a good and secure design with OO design

    - by Andrew
    I am new to PHP - a first-time developer. I am working on my web application and it is nearly done; nevertheless, most of my SQL is done directly in code using direct mysql requests. This is the way I approached it: in classes_db.php I declared the DB settings and created methods that I use to open and close DB connections. I declare those objects on my regular pages: class classes_db { public $dbserver = 'server'; public $dbusername = 'user'; public $dbpassword = 'pass'; public $dbname = 'db'; function openDb() { $dbhandle = mysql_connect($this->dbserver, $this->dbusername, $this->dbpassword); if (!$dbhandle) { die('Could not connect: ' . mysql_error()); } $selected = mysql_select_db($this->dbname, $dbhandle) or die("Could not select the database"); return $dbhandle; } function closeDb($con) { mysql_close($con); } } On my regular page, I do this: <?php require 'classes_db.php'; session_start(); //create instance of the DB class $db = new classes_db(); //get dbhandle $dbhandle = $db->openDb(); //process query $result = mysql_query("update user set username = '" . $usernameFromForm . "' where iduser= " . $_SESSION['user']->iduser); //close the connection if (isset($dbhandle)) { $db->closeDb($dbhandle); } ?> My question is: how do I do this right and make it OO and secure? I know that I need to incorporate prepared queries - what is the best way to do that? Please provide some code.

    Read the article
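
    One common way to get both the OO structure and the safety is to wrap a single PDO connection in a small class and run every statement as a prepared query, so form input is never concatenated into the SQL. A minimal sketch (credentials and table/column names taken from the question):

        <?php
        // Minimal sketch: one PDO connection, prepared statements everywhere.
        class Database
        {
            private $pdo;

            public function __construct()
            {
                $this->pdo = new PDO(
                    'mysql:host=server;dbname=db;charset=utf8',
                    'user',
                    'pass',
                    array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION)
                );
            }

            public function updateUsername($newUsername, $userId)
            {
                $stmt = $this->pdo->prepare(
                    'UPDATE user SET username = :username WHERE iduser = :id'
                );
                // Bound parameters travel separately from the SQL text,
                // so the form value cannot alter the query.
                return $stmt->execute(array(
                    ':username' => $newUsername,
                    ':id'       => $userId,
                ));
            }
        }

        // usage
        $db = new Database();
        $db->updateUsername($usernameFromForm, $_SESSION['user']->iduser);
        ?>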

  • Call Oracle package function using Odbc from C#

    - by Paolo Tedesco
    I have a function defined inside an Oracle package: CREATE OR REPLACE PACKAGE BODY TESTUSER.TESTPKG as FUNCTION testfunc(n IN NUMBER) RETURN NUMBER as begin return n + 1; end testfunc; end testpkg; / How can I call it from C# using Odbc? I tried the following: using System; using System.Data; using System.Data.Odbc; class Program { static void Main(string[] args) { using (OdbcConnection connection = new OdbcConnection("DSN=testdb;UID=testuser;PWD=testpwd")) { connection.Open(); OdbcCommand command = new OdbcCommand("TESTUSER.TESTPKG.testfunc", connection); command.CommandType = System.Data.CommandType.StoredProcedure; command.Parameters.Add("ret", OdbcType.Int).Direction = ParameterDirection.ReturnValue; command.Parameters.Add("n", OdbcType.Int).Direction = ParameterDirection.Input; command.Parameters["n"].Value = 42; command.ExecuteNonQuery(); Console.WriteLine(command.Parameters["ret"].Value); } } } But I get an exception saying "Invalid SQL Statement". What am I doing wrong?

    Read the article
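
    With System.Data.Odbc, setting CommandType.StoredProcedure does not build the call for you; the command text still has to use the ODBC call escape sequence, including a placeholder for the return value. A minimal sketch of the same call in that form (assuming the Oracle ODBC driver has procedure/function support enabled):

        // inside the existing using (OdbcConnection connection = ...) block
        OdbcCommand command = new OdbcCommand(
            "{ ? = CALL TESTUSER.TESTPKG.testfunc(?) }", connection);
        command.CommandType = System.Data.CommandType.StoredProcedure;

        command.Parameters.Add("ret", OdbcType.Int).Direction = ParameterDirection.ReturnValue;
        command.Parameters.Add("n", OdbcType.Int).Direction = ParameterDirection.Input;
        command.Parameters["n"].Value = 42;

        command.ExecuteNonQuery();
        Console.WriteLine(command.Parameters["ret"].Value); // expected: 43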

  • How to specify multiple values in where with AR query interface in rails3

    - by wkhatch
    Section 2.2 of the Rails guide on the Active Record query interface seems to indicate that I can pass a string specifying the condition(s), then an array of values that should be substituted at some point while the arel is being built. So I've got a statement that generates my conditions string, which can be a varying number of attributes chained together with either AND or OR between them, and I pass in an array as the second arg to the where method, and I get: ActiveRecord::PreparedStatementInvalid: wrong number of bind variables (1 for 5) which leads me to believe I'm doing this incorrectly. However, I'm not finding anything on how to do it correctly. To restate the problem another way, I need to pass in a string to the where method such as "table.attribute = ? AND table.attribute1 = ? OR table.attribute1 = ?" with an unknown number of these conditions ANDed or ORed together, and then pass something, what I thought would be an array, as the second argument that would be used to substitute the values in the first argument's conditions string. Is this the correct approach, or am I just missing some other huge concept somewhere and coming at this all wrong? I'd think that somehow this has to be possible, short of just generating a raw SQL string.

    Read the article
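
    The "wrong number of bind variables (1 for 5)" error means the whole array arrived as a single bind value; where expects one argument per placeholder, so the array has to be splatted (or folded into one conditions array). A minimal sketch with hypothetical column names:

        conditions = "table.attr1 = ? AND table.attr2 = ? OR table.attr3 = ?"
        values     = [1, 2, 3]

        # Passing the array as-is binds it as one value (hence "1 for N"):
        # Model.where(conditions, values)        # => PreparedStatementInvalid

        # Splat the array so each element becomes its own bind variable:
        Model.where(conditions, *values)

        # Equivalent form: a single array whose first element is the string.
        Model.where([conditions, *values])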

  • How to insert and call by row and column into sqlite3 python, great tutorial problem.

    - by user291071
    Let's say I have a simple array of x rows and y columns with corresponding values. What is the best method to do 3 things: insert or update a value at a specific row and column, and select a value for a given row and column? import sqlite3 con = sqlite3.connect('simple.db') c = con.cursor() c.execute('''create table simple (links text)''') con.commit() dic = {'x1':{'y1':1.0,'y2':0.0},'x2':{'y1':0.0,'y2':2.0,'y3':1.5},'x3':{'y2':2.0,'y3':1.5}} ucols = {} ## my current thoughts are to collect all row values and all column values from dic and populate the table rows and columns accordingly; how to call by row and column I haven't figured out yet ##populate rows in first column for row in dic: print row c.execute("""insert into simple ('links') values ('%s')"""%row) con.commit() ##unique columns for row in dic: print row for col in dic[row]: print col ucols[col]=dic[row][col] ##populate columns for col in ucols: print col c.execute("alter table simple add column '%s' 'float'" % col) con.commit() #functions needed ##insert values into sql by row x and column y? how to do this e.g. x1 and y2 should put in 0.0 ##I tried as follows, didn't work for row in dic: for col in dic[row]: val =dic[row][col] c.execute("""update simple SET '%s' = '%f' WHERE 'links'='%s'"""%(col,val,row)) con.commit() ##update value at a specific row x and column y? ## select a value at a specific row x and column y?

    Read the article
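
    The update above never matches a row because WHERE 'links'='x1' compares two string literals: in SQL, single quotes make string literals and double quotes make identifiers, and values belong in ? placeholders rather than string formatting. A minimal sketch of insert, update and select against the table built above (column names still have to be spliced into the SQL as quoted identifiers; only values can be bound):

        import sqlite3

        con = sqlite3.connect('simple.db')
        c = con.cursor()

        # update the value at row x1, column y2
        c.execute('UPDATE simple SET "y2" = ? WHERE "links" = ?', (0.0, 'x1'))
        con.commit()

        # select the value at row x2, column y3
        c.execute('SELECT "y3" FROM simple WHERE "links" = ?', ('x2',))
        print(c.fetchone()[0])

        # insert a new row with a value in column y1
        c.execute('INSERT INTO simple ("links", "y1") VALUES (?, ?)', ('x4', 1.0))
        con.commit()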

  • assembling an object graph without an ORM -- in the service layer or data layer?

    - by Hans Gruber
    At my current gig, our persistence layer uses iBATIS going against SQL Server stored procedures (puke). IMHO, this approach has many disadvantages over the use of a "true" ORM such as NHibernate or EF, but the one I'm trying to address here revolves around all the boilerplate code needed to map data from a result set into an object graph. Say I have the following DTO object graph I want to return to my presentation layer: IEnumerable<CustomerDTO> |--> IEnumerable<AddressDTO> |--> LatestOrderDTO The way I've implemented this is to have a discrete method in my DAO class to return each IEnumerable<*DTO>, and then have my service class be responsible for orchestrating the calls to the DAO. It then returns the fully assembled object graph to the client: public class SomeService(){ public SomeService(IDao someDao){ this._someDao = someDao; } public IEnumerable<CustomerDTO> ListCustomersForHistory(int brokerId){ var customers = _someDao.ListCustomersForBroker(brokerId); foreach (customer in customers){ customer.Addresses = someDao.ListCustomersAddresses(brokerId); customer.LatestOrder = someDao.GetCustomerLatestOrder(brokerId); } return customers; } } My question is: should this logic belong in the service layer, or should I make my DAO such that it instead returns the assembled object graph? If I were using NHibernate, I assume that this kind of relationship association between objects comes for "free"?

    Read the article

  • ASP.NET MVC 2 Model Data Persistence

    - by toccig
    I'm an MVC1 programmer, new to MVC2. The data will not persist to the database in an edit scenario; create works fine. Controller: // // POST: /Attendee/Edit/5 [Authorize(Roles = "Admin")] [AcceptVerbs(HttpVerbs.Post)] public ActionResult Edit(Attendee attendee) { if (ModelState.IsValid) { UpdateModel(attendee, "Attendee"); repository.Save(); return RedirectToAction("Details", attendee); } else { return View(attendee); } } Model: [MetadataType(typeof(Attendee_Validation))] public partial class Attendee { } public class Attendee_Validation { [HiddenInput(DisplayValue = false)] public int attendee_id { get; set; } [HiddenInput(DisplayValue = false)] public int attendee_pin { get; set; } [Required(ErrorMessage = "* required")] [StringLength(50, ErrorMessage = "* Must be under 50 characters")] public string attendee_fname { get; set; } [StringLength(50, ErrorMessage = "* Must be under 50 characters")] public string attendee_mname { get; set; } } I tried to add [Bind(Exclude="attendee_id")] above the class declaration, but then the value of the attendee_id attribute is set to '0'. View (strongly-typed): <% using (Html.BeginForm()) {%> ... <%=Html.Hidden("attendee_id", Model.attendee_id) %> ... <%=Html.SubmitButton("btnSubmit", "Save") %> <% } %> Basically, the repository.Save(); call seems to do nothing. I imagine it has something to do with a primary key constraint violation, but I'm not getting any errors from SQL Server. The application appears to run fine, but the data is never persisted to the database.

    Read the article
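
    A likely cause is that the posted Attendee is a brand-new detached object, so the repository's data context has nothing to track when Save() is called. A hedged sketch, assuming a LINQ to SQL / EF style repository with a lookup method (GetAttendee below is hypothetical): load the tracked entity first, copy the posted values onto it, then save.

        [Authorize(Roles = "Admin")]
        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Edit(int attendee_id, FormCollection form)
        {
            // hypothetical repository method returning the attached entity
            Attendee attendee = repository.GetAttendee(attendee_id);

            if (TryUpdateModel(attendee))
            {
                repository.Save();   // the context now has tracked changes to persist
                return RedirectToAction("Details", new { id = attendee.attendee_id });
            }
            return View(attendee);
        }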

  • Invalid Argument javascript error only on certain computers

    - by Jen
    Getting an error whenever we click a particular button/link on our site. It is generating a javascript "Invalid Argument" error. I know from other posts that it is typically because of a syntax error in the javascript; however, it only just seems to have started happening, and it doesn't happen on all PCs. I.e. in our client's environment, if I remote onto their web server and view the UAT website I get the javascript error; if I remote onto their SQL server and view the UAT website I don't get the javascript error. If it were a syntax error then I would always get the error, wouldn't I? Both browsers are the same version of IE6 (yeah I know...) :) I have tried deleting temporary internet files - including viewing the files and deleting them myself - but no joy. The client uses Citrix, and they're all getting the error :( Any ideas would be appreciated - thanks! :) Update - Sorry I haven't posted specific code, as there is too much to post (and I'm not sure where the error is occurring). The "button" launches a new window which in turn opens up a couple of aspx pages and calls lots of javascript. The window opens OK, and there's a function that gets called to resize the window - but before it calls the resizing of the window/content it throws the invalid argument error. I'm busy trying to get alerts to trigger to see if I can see where it's falling over, but so far no luck. Again, I'm not sure why this error doesn't occur when I use a particular PC (same browser version).

    Read the article

  • Where are the network boundaries in the Java Connector Architecture (JCA)?

    - by Laird Nelson
    I am writing a JCA resource adapter. I'm also, as I go, trying to fully understand the connection management portion of the JCA specification. As a thought experiment, pretend that the only client of this adapter will be a Swing Java Application Client located on a different machine. Also assume that the resource adapter will communicate with its "enterprise information system" (EIS) over the network as well. As I understand the JCA specification, the .rar file is deployed to the application server. The application server creates the .rar file's implementation of the ManagedConnectionFactory interface. It then asks it to produce a connection factory, which is the opaque object that is deployed to JNDI for the user to use to obtain a connection to the resource. (In the case of JDBC, the connection factory is a javax.sql.DataSource.) It is a requirement that the connection factory retain a reference to the application-server-supplied ConnectionManager, which, in turn, is required to be Serializable. This makes sense--in order for the connection factory to be stored in JNDI, it must be serializable, and in order for it to keep a reference to the ConnectionManager, the ConnectionManager must also be serializable. So fine, this little object graph gets installed in the application client's JNDI tree. This is where I start to get queasy. Is the ConnectionManager--the piece supplied by the application server that is supposed to handle connection management, sharing, pooling, etc.--wholly present on the client at this point? One of its jobs is to create ManagedConnection instances, and a ManagedConnection is not required to be Serializable, and the user connection handles it vends are also not required to be Serializable. That suggests to me that the whole connection pooling machinery is shipped wholesale to the application client and stuffed into its JNDI tree. Does this all mean that JCA interactions from the client side bypass the server-side componentry of the application server? Where are the network boundaries in the JCA API?

    Read the article

  • Performance Comparison of Shell Scripts vs high level interpreted langs (C#/Java/etc.)

    - by dferraro
    Hi all, First - this is not meant to be a 'which is better' war thread... Rather, I generally need help in making an architecture decision / argument to put forward to my boss. Skipping the details - I simply would love to know and find the results of anyone who has done performance comparisons of shell scripts vs. [insert general-purpose programming language (interpreted) here], such as C# or Java... Surprisingly, I have spent some time searching on Google and here and have not found any of this data. Has anyone ever done these comparisons, in different use cases: hitting a database in X number of loops doing different types of SQL (Oracle preferred, but MSSQL would do) queries such as any of the CRUD ops - and also not hitting a database and just doing a regular 50k-loop type comparison with different types of calculations, and things of that nature? In particular - for right now, I need a comparison of hitting an Oracle DB from a shell script vs., let's say, C# (again, any GPPL that's interpreted would be fine, even higher-level ones like Python). But I also need to know about standard programming calculations / instructions / etc... Before you ask 'why not just write a quick test yourself?' - the answer is: I've been a Windows developer my whole life/career and have very limited knowledge of shell scripting - not to mention *nix as a whole... So asking the question here, of the more experienced guys, would be greatly beneficial, not to mention time-saving, as we are in a near-perpetual deadline crunch as it is ;). Thanks so much in advance,

    Read the article

  • PHP Form multiple buttons

    - by Ken
    I have a form with 2 buttons; depending on which is selected, the record will either be deleted or edited in the database. Each of those is an individual page using SQL statements (questionedit and questiondelete). However, when I press a button, nothing happens... Any ideas? Here is my javascript <script type="text/javascript"> function SelectedButton(button) { if(button == 'edit') { document.testedit_questionform.action ="testedit_questionedit.php"; } else if(button == 'delete') { document.testedit_questionform.action ="testedit_questiondelete.php"; } document.forms[].testedit_questionform.submit(); } </script> Here is my form (being echoed from a loop) <form name=\"testedit_questionform\" action=\"SelectedButton\" method=\"POST\"> <span class=\"grid_11 prefix_1\" id=\"\" > Question:<input type=\"text\" name=\"QuestionText\" style=\"width:588px; margin-left:10px;\" value=\"$row[0]\"/> <input type=\"button\" value=\"Edit\" name=\"Operation\"onclick=\"submitForm(\'edit\')\" /> <input type=\"button\" value=\"Delete\" name=\"Operation\"onclick=\"submitForm(\'delete\')\" /> <input type=\"hidden\" name=\"QId\" value=\"$row[3]\" /><br />"); </form>

    Read the article
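
    Nothing happens because the markup calls submitForm() while the script defines SelectedButton(), and document.forms[].testedit_questionform is not valid syntax; since the form is echoed in a loop, referring to it by name is also ambiguous. A minimal sketch with the handler and markup aligned (file names kept from the question):

        <script type="text/javascript">
        // One function name, matching the onclick handlers, and a direct
        // reference to the form that contains the clicked button.
        function submitForm(form, button) {
            if (button === 'edit') {
                form.action = 'testedit_questionedit.php';
            } else if (button === 'delete') {
                form.action = 'testedit_questiondelete.php';
            }
            form.submit();
        }
        </script>

        <!-- pass the enclosing form with "this.form" so it still works when
             the form is echoed several times in a loop -->
        <form name="testedit_questionform" method="POST">
            <input type="text" name="QuestionText" value="..." />
            <input type="button" value="Edit"   onclick="submitForm(this.form, 'edit')" />
            <input type="button" value="Delete" onclick="submitForm(this.form, 'delete')" />
            <input type="hidden" name="QId" value="..." />
        </form>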

  • How to show useful error messages from a database error callback in Phonegap?

    - by Magnus Smith
    Using Phonegap you can set a function to be called back if the whole database transaction or the individual SQL statement errors. I'd like to know how to get more information about the error. I have one generic error-handling function, and lots of different SELECTs or INSERTs that may trigger it. How can I tell which one was at fault? It is not always obvious from the error message. My code so far is... function get_rows(tx) { tx.executeSql("SELECT * FROM Blah", [], lovely_success, statement_error); } function add_row(tx) { tx.executeSql("INSERT INTO Blah (1, 2, 3)", [], carry_on, statement_error); } function statement_error(tx, error) { alert(error.code + ' / ' + error.message); } From various examples I see the error callback will be passed a transaction object and an error object. I read that .code can have the following values: UNKNOWN_ERR = 0 DATABASE_ERR = 1 VERSION_ERR = 2 TOO_LARGE_ERR = 3 QUOTA_ERR = 4 SYNTAX_ERR = 5 CONSTRAINT_ERR = 6 TIMEOUT_ERR = 7 Are there any other properties/methods of the error object? What are the properties/methods of the transaction object at this point? I can't seem to find a good online reference for this. Certainly not on the Phonegap website!

    Read the article
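
    The Web SQL error object handed to a statement error callback only carries code and message, so the usual trick is to close over the SQL text when creating the callback. A minimal sketch reusing the functions from the question:

        // Build the error callback in a closure so it can report which
        // statement (and which arguments) actually failed.
        function statementError(sql, args) {
            return function (tx, error) {
                alert('Failed SQL: ' + sql +
                      '\nArgs: ' + JSON.stringify(args) +
                      '\nError ' + error.code + ': ' + error.message);
                return false; // continue the transaction (return true to roll it back)
            };
        }

        function get_rows(tx) {
            var sql = 'SELECT * FROM Blah';
            tx.executeSql(sql, [], lovely_success, statementError(sql, []));
        }

        function add_row(tx) {
            var sql = 'INSERT INTO Blah VALUES (?, ?, ?)';
            var args = [1, 2, 3];
            tx.executeSql(sql, args, carry_on, statementError(sql, args));
        }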

  • (Rails) Creating multi-dimensional hashes/arrays from a data set...?

    - by humble_coder
    Hi All, I'm having a bit of an issue wrapping my head around something. I'm currently using a hacked version of Gruff in order to accommodate "Scatter Plots". That said, the data is entered in the form of: g.data("Person1",[12,32,34,55,23],[323,43,23,43,22]) ...where the first item is the ENTITY, the second item is X-COORDs, and the third item is Y-COORDs. I currently have a recordset of items from a table with the columns: POINT, VALUE, TIMESTAMP. Due to the "complex" calculations involved I must grab everything using a single query or risk way too much DB activity. That said, I have a list of items for which I need to dynamically collect all data from the recordset into a hash (or array of arrays) for the creation of the data items. I was thinking something like the following: @h={} e = Events.find_by_sql(my_query) e.each do |event| @h["#{event.Point}"][x] = event.timestamp @h["#{event.Point}"][y] = event.value end Obviously that's not the correct syntax, but that's where my brain is going. Could someone clean this up for me or suggest a more appropriate mechanism by which to accomplish this? Basically the main goal is to keep data for each pointname grouped (but remember the recordset has them all). Much appreciated. EDIT 1 g = Gruff::Scatter.new("600x350") g.title = self.name e = Event.find_by_sql(@sql) h ={} e.each do |event| h[event.Point.to_s] ||= {} h[event.Point.to_s].merge!({event.Timestamp.to_i,event.Value}) end h.each do |p| logger.info p[1].values.inspect g.data(p[0],p[1].keys,p[1].values) end g.write(@chart_file)

    Read the article

  • Need a refresher course on property access...

    - by Code Sherpa
    Hi. I need help with accessing class properties within a given class. For example, take the below class: public partial class Account { private Profile _profile; private Email _email; private HostInfo _hostInfo; public Profile Profile { get { return _profile; } set { _profile = value; } } public Email Email { get { return _email; } set { _email = value; } } public HostInfo HostInfo { get { return _hostInfo; } set { _hostInfo = value; } } In the class "Account" exists a bunch of class properties such as Email or Profile. Now, when I want to access those properties at run-time, I do something like this (for Email): _accountRepository = ObjectFactory.GetInstance<IAccountRepository>(); string username = Cryptography.Decrypt(_webContext.UserNameToVerify, "verify"); Account account = _accountRepository.GetAccountByUserName(username); if(account != null) { account.Email.IsConfirmed = true; But, I get "Object reference not set..." for account.Email... Why is that? How do I access Account such that account.Email, account.Profile, and so on returns the correct data for a given AccountId or UserName. Here is a method that returns Account: public Account GetAccountByUserName(string userName) { Account account = null; using (MyDataContext dc = _conn.GetContext()) { try { account = (from a in dc.Accounts where a.UserName == userName select a).FirstOrDefault(); } catch { //oops } } return account; } The above works but when I try: account = (from a in dc.Accounts join em in dc.Emails on a.AccountId equals em.AccountId join p in dc.Profiles on em.AccountId equals p.AccountId where a.UserName == userName select a).FirstOrDefault(); I am still getting object reference exceptions for my Email and Profile properties. Is this simply a SQL problem or is there something else I need to be doing to be able to fully access all the properties within my Account class? Thanks!

    Read the article
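
    account.Email comes back null because the related rows are never loaded before the DataContext in the using block is disposed. A hedged sketch, assuming LINQ to SQL with Account-to-Email/Profile associations defined in the model (if the properties are hand-written and unmapped, they stay null until you populate them yourself):

        // requires: using System.Data.Linq;
        public Account GetAccountByUserName(string userName)
        {
            using (MyDataContext dc = _conn.GetContext())
            {
                // Eager-load the related rows before the context is disposed.
                var options = new DataLoadOptions();
                options.LoadWith<Account>(a => a.Email);
                options.LoadWith<Account>(a => a.Profile);
                options.LoadWith<Account>(a => a.HostInfo);
                dc.LoadOptions = options;   // must be set before any query runs

                return (from a in dc.Accounts
                        where a.UserName == userName
                        select a).FirstOrDefault();
            }
        }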

  • Hibernate: fetching multiple bags efficiently

    - by Jens Jansson
    Hi! I'm developing a multilingual application. For this reason many objects have, in their name and description fields, collections of something I call LocalizedStrings instead of plain strings. Every LocalizedString is basically a pair of a locale and a string localized to that locale. Let's take an example entity, say a Book object. public class Book{ @OneToMany private List<LocalizedString> names; @OneToMany private List<LocalizedString> description; //and so on... } When a user asks for a list of books, the application does a query to get all the books, fetches the name and description of every book in the locale the user has selected to run the app in, and displays them back to the user. This works, but it is a major performance issue. At the moment Hibernate makes one query to fetch all the books, and after that it goes through every single object and asks Hibernate for the localized strings for that specific object, resulting in an "n+1 selects" problem. Fetching a list of 50 entities produces about 6000 rows of SQL commands in my server log. I tried making the collections eager, but that led me to the "cannot simultaneously fetch multiple bags" issue. Then I tried setting the fetch strategy on the collections to subselect, hoping that it would do one query for all books and after that one query that fetches all LocalizedStrings for all the books. Subselect didn't work in this case as I would have hoped; it basically did exactly the same as my first case. I'm starting to run out of ideas on how to optimize this. So in short: what fetching strategy alternatives are there when you are fetching a collection, and every element in that collection has one or more collections of its own, which have to be fetched simultaneously?

    Read the article
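
    One way out, sketched below with hedged assumptions: declaring the collections as Sets removes the "cannot simultaneously fetch multiple bags" restriction, and subselect (or batch) fetching then collapses the n+1 selects into a handful of queries when the strings are read.

        import java.util.HashSet;
        import java.util.Set;

        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.OneToMany;

        import org.hibernate.annotations.Fetch;
        import org.hibernate.annotations.FetchMode;

        @Entity
        public class Book {

            @Id
            @GeneratedValue
            private Long id;

            @OneToMany
            @Fetch(FetchMode.SUBSELECT)  // one extra query for all names of all loaded books
            private Set<LocalizedString> names = new HashSet<LocalizedString>();

            @OneToMany
            @Fetch(FetchMode.SUBSELECT)
            private Set<LocalizedString> descriptions = new HashSet<LocalizedString>();

            // Alternative: keep the collections lazy and annotate them with
            // @org.hibernate.annotations.BatchSize(size = 50) so the strings
            // are loaded in groups instead of one book at a time.
        }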

  • PHP Foreach statement issue. Multiple rows are returned

    - by Daniel Patilea
    I'm a PHP beginner and lately I've been having a problem with my source code. Here it is: <html> <head> <title> Bot </title> <link type="text/css" rel="stylesheet" href="main.css" /> </head> <body> <form action="bot.php "method="post"> <lable>You:<input type="text" name="intrebare"></lable> <input type="submit" name="introdu" value="Send"> </form> </body> </html> <?php //error_reporting(E_ALL & ~E_NOTICE); mysql_connect("localhost", "root", "") or die(mysql_error()); mysql_select_db("robo") or die(mysql_error()); $intrebare=$_POST['intrebare']; $query = "SELECT * FROM dialog where intrebare like '%$intrebare%'"; $result = mysql_query($query) or die(mysql_error()); $row = mysql_fetch_array($result) or die(mysql_error()); ?> <div id="history"> <?php foreach($row as $rows){ echo "<b>The robot says: </b><br />"; echo $row['raspuns']; } ?> </div> It returns the result 6 times. This problem appeared when I added that foreach, because I wanted the results to stack up on the page one by one after every SQL query. Can you please tell me what seems to be the problem? Thanks!

    Read the article
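
    mysql_fetch_array() returns a single row keyed both numerically and by column name, so the foreach walks 6 keys of one row (2 key styles x 3 columns) and prints the same answer each time. A minimal sketch that fetches row by row instead (reusing $query from the code above):

        <?php
        $result = mysql_query($query) or die(mysql_error());
        ?>
        <div id="history">
        <?php
        // One iteration per matching row; mysql_fetch_assoc() returns
        // only the named columns and false when the rows run out.
        while ($row = mysql_fetch_assoc($result)) {
            echo "<b>The robot says: </b><br />";
            echo htmlspecialchars($row['raspuns']);
        }
        ?>
        </div>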

  • twitter streaming api instead of search api

    - by user1711576
    I am using Twitter's search API to view all the tweets that use a particular hashtag I want to view. However, I want to use the streaming functionality so I only get recent ones, and so I can then store them. <?php global $total, $hashtag; $hashtag = $_POST['hash']; $total = 0; function getTweets($hash_tag, $page) { global $total, $hashtag; $url = 'http://search.twitter.com/search.json?q='.urlencode($hash_tag).'&'; $url .= 'page='.$page; $ch = curl_init($url); curl_setopt ($ch, CURLOPT_RETURNTRANSFER, TRUE); $json = curl_exec ($ch); curl_close ($ch); echo "<pre>"; $json_decode = json_decode($json); print_r($json_decode->results); $json_decode = json_decode($json); $total += count($json_decode->results); if($json_decode->next_page){ $temp = explode("&",$json_decode->next_page); $p = explode("=",$temp[0]); getTweets($hashtag,$p[1]); } } getTweets($hashtag,1); echo $total; ?> The above code is what I have been using to search for the tweets I want. What do I need to do to change it so I can stream the tweets instead? I know I would have to use the URL https://api.twitter.com/1.1/search/tweets.json , but what I need to change after that is where I don't know what to do. Obviously, I know I'll need to write the database SQL, but I want to just capture the stream first and view it. How would I do this? Is the code I have been using not any good for just capturing the stream?

    Read the article

  • How To Correctly Specify A Default Value For A String Field In A PHP/MySQL Prepared Statement

    - by Joshua
    I'm trying to debug some auto-generated code, but I am a MySQL noob. Everything goes fine until the "prepare" line below, and then for some reason $mysqli_stmt is false, yielding the stated error. Could it have something to do with the SQL_MODE = 'ANSI'? The failure seems to have something to do with the string 'xxx' below, but it still happens no matter what I change it to. This value is meant to be a default value for the TickerDigest field, but strangely, if I change 'xxx' to 'c_u_TickerDigest' then it suddenly works - yet the TickerDigest field is inserted as 'null' when I look in the database. $mysqli = mysqli_init(); $mysqli->options(MYSQLI_INIT_COMMAND, "SET SQL_MODE = 'ANSI'"); $mysqli->real_connect(SR_Host,SR_Username,SR_Password,SR_Database) or die('Unable to connect to Database'); $sql_stmt = 'INSERT INTO "t_sr_u_Product"("c_u_Name", "c_u_Code", "c_u_TickerDigest") VALUES (?, ?, "xxx")'; $mysqli_stmt = $mysqli->prepare($sql_stmt); Fatal error: Uncaught exception 'Exception' with message 'INSERT INTO "t_sr_u_Product"("c_u_Name", "c_u_Code", "c_u_TickerDigest") VALUES (?, ?, "xxx"): prepare statement failed: Unknown column 'xxx' in 'field list'' in P:\StarRise\SandBox\GateKeeper\Rise\srIProduct.php on line 18 I'm hopeful that what's going wrong is fairly simple, since I'm almost completely ignorant about SQL.

    Read the article
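
    Under SQL_MODE = 'ANSI' the ANSI_QUOTES flag is active, so double quotes delimit identifiers; "xxx" is therefore parsed as a column name, which is exactly what the "Unknown column 'xxx' in 'field list'" error says. A minimal sketch that keeps the double-quoted identifiers but passes the default as a bound value (a single-quoted literal 'xxx' in the SQL would also work); $name and $code are hypothetical form values:

        <?php
        // Double quotes = identifiers under ANSI_QUOTES; bind the default instead.
        $sql_stmt = 'INSERT INTO "t_sr_u_Product"("c_u_Name", "c_u_Code", "c_u_TickerDigest")
                     VALUES (?, ?, ?)';
        $stmt = $mysqli->prepare($sql_stmt) or die($mysqli->error);

        $name   = 'Widget';
        $code   = 'W-1';
        $digest = 'xxx';
        $stmt->bind_param('sss', $name, $code, $digest);
        $stmt->execute();
        ?>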

  • Hopping from a C++ to a Perl Unix profile?

    - by rocknroll
    Hi all, I have been a C++/Linux developer till now and I am adept in this stack. Of late I have been getting opportunities that require Perl/Unix expertise (with knowledge of C++ and shell scripting). Organisations are showing interest even though I don't have much scripting experience to boast of. The role is more of a support/maintenance project involving SQL as well. Of late I have been in a fix over whether to forgo these offers or not. I don't know the dynamics of an IT organisation, and thus on one hand I fear that my C++ experience will be nullified, while on the positive side I would get to work on a new technology stack which will only add to my skill set. I am sure most of you have encountered such dilemmas at some point and made a decision. I want you to share your perspectives on such a scenario, where a person is required to change his/her technology stack when changing jobs. What are the merits and demerits of either choice? Also, I know that C++ isn't going anywhere in the near future. What about Perl? I have no clue as to what the future holds for a Perl developer. Are there enough opportunities for Perl developers? I am asking this question here because most of my fellow programmers face this career-choice dilemma. Thanks.

    Read the article

  • Is using a GUI worse than using bash and other text interface tools?

    - by Glycerine
    As a freelancer, and a previous Adobe trainer, I get a look-in at many development workflows and alternate styles of programming and design, and am therefore quite open to different workflows. But on a recent job I needed to use SQL - so I whipped out Navicat and wrote a nice join statement. I got sniggering comments and sideways glances when I told the developers I was working with Navicat: I prefer a GUI to bash, and I prefer Aptana to Notepad. I felt a little insulted and under-skilled, as I didn't want to sit in front of reams of spewing green text. I know and use those tools when required, but I prefer more attractive, modern products. Have you guys had this? What do you do to overcome it - apart from using bash and Notepad more? How do you subdue a developer who's being arrogant about his skill? And is it my fault? I hope this question is not subjective; I do feel inferior to my peers because of it - so some advice would really help.

    Read the article

  • Enterprise integration of disparate systems

    - by Chris Latta
    We're about to embark on a fairly large integration effort to kill off a bunch of Access and Sql Server databases and get everything into one coherent enterprise system. There are also a number of other systems (accounting, CRM, payroll, MS Exchange) that hold critical data that we need to integrate (use for data validation in other systems), report on and otherwise expose. It is likely that some of these systems will change in the next few years, so we need to isolate our systems to be ready for change. Ideally we would be able to expose our forms in a consistent manner across as many of our our systems as possible without having to re-develop them for each system. We are currently targeting SharePoint (2007 and soon 2010), Office (2007 and soon 2010 - Word, Excel, PowerPoint and Outlook), Reporting Services, .Net console applications, .Net Windows applications, shell extensions, and with the possibility of exposing some functionality on mobile devices (BlackBerries currently, maybe iPhones later) and via our website. We're moving development to Visual Studio 2010 (from 2005) ahead of migrating to SharePoint 2010 and Office 2010. Given that most of our development is presently targeted to the .Net framework (mostly in C#) it seems logical to stick with this unless there is some compelling reason to switch frameworks/platform for some aspects. We're thinking of your standard Database-Data Integration layer-Business Objects Layer-Web Services (or REST) layer-Client Application plus doing our own client application with WPF (or something else?) forms that can also be exposed in the MS systems (SharePoint, Office, Windows). So, we don't want much, just everything :) Basically we need to isolate ourselves from database and systems changes, create an API that can be used throughout our systems and then make this functionality available in our client applications. I'm very keen to get pointers from anyone who has tips on how to pull this off. Should we look at the Enterprise Library as a place to start? Is REST with ASP.Net MVC2 a better solution than Web Services for a system like this? Will WPF deliver forms re-use or is there something better?

    Read the article
