Search Results

Search found 9996 results on 400 pages for 'db'.


  • Element not found blocks execution in Selenium

    - by Mariano
    In my test, I try to verify whether certain text exists (after an action) using find_element_by_xpath. If I use the right expression and my test passes, the routine ends correctly in no time. However, if I try text that is wrong (meaning the test will fail), it hangs forever and I have to kill the script, otherwise it does not end. Here is my test (the expression Thx user, client or password you entered is incorrect does not exist in the system, no matter what the user does): # -*- coding: utf-8 -*- import gettext import unittest from selenium import webdriver class TestWrongLogin(unittest.TestCase): def setUp(self): self.driver = webdriver.Firefox() self.driver.get("http://10.23.1.104:8888/") # let's check the language try: self.lang = self.driver.execute_script("return navigator.language;") self.lang = self.lang.split("-")[0] except: self.lang = "en" language = gettext.translation('app', '/app/locale', [self.lang], fallback=True) language.install() self._ = gettext.gettext def tearDown(self): self.driver.quit() def test_wrong_client(self): # test wrong client inputElement = self.driver.find_element_by_name("login") inputElement.send_keys("root") inputElement = self.driver.find_element_by_name("client") inputElement.send_keys("Unleash") inputElement = self.driver.find_element_by_name("password") inputElement.send_keys("qwerty") self.driver.find_element_by_name("form.submitted").click() # wait for the db answer self.driver.implicitly_wait(10) ret = self.driver.find_element_by_xpath( "//*[contains(.,'{0}')]".\ format(self._(u"Thx user, client or password you entered is incorrect"))) self.assertTrue(isinstance(ret, webdriver.remote.webelement.WebElement)) if __name__ == '__main__': unittest.main() Why does it do that, and how can I prevent it?
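
    A common way to avoid the long hang when the text is absent is to replace the implicit wait with an explicit, bounded wait, so a missing message surfaces as a timeout instead of blocking the script. Below is a minimal sketch of that idea; it assumes a Selenium version that ships selenium.webdriver.support (the URL and message string are just the placeholders from the question):

        from selenium import webdriver
        from selenium.webdriver.common.by import By
        from selenium.webdriver.support.ui import WebDriverWait
        from selenium.webdriver.support import expected_conditions as EC
        from selenium.common.exceptions import TimeoutException

        driver = webdriver.Firefox()
        driver.get("http://10.23.1.104:8888/")  # placeholder URL from the question

        message = u"Thx user, client or password you entered is incorrect"
        xpath = u"//*[contains(., '{0}')]".format(message)

        try:
            # Wait at most 10 seconds; a missing element raises TimeoutException
            # instead of hanging the test.
            element = WebDriverWait(driver, 10).until(
                EC.presence_of_element_located((By.XPATH, xpath)))
        except TimeoutException:
            element = None

        driver.quit()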

    Read the article

  • SQL Compact performance on device

    - by Ben M
    My SQL Compact database is very simple, with just three tables and a single index on one of the tables (the table with 200k rows; the other two have less than a hundred each). The first time the .sdf file is used by my Compact Framework application on the target Windows Mobile device, the system hangs for well over a minute while "something" is done to the database: when deployed, the DB is 17 megabytes, and after this first usage, it balloons to 24 megs. All subsequent usage is pretty fast, so I'm assuming there's some sort of initialization / index building going on during this first usage. I'd rather not subject the user to this delay, so I'm wondering what this initialization process is and whether it can be performed before deployment. For now, I've copied the "initialized" database back to my desktop for use in the setup project, but I'd really like to have a better answer / solution. I've tried "full compact / repair" in the VS Database Properties dialog, but this made no difference. Any ideas? For the record, I should add that the database is only read from by the device application -- no modifications are made by that code.

    Read the article

  • How to set buffer size in client-server app using sockets?

    - by nelly
    First of all, I am new to networking, so I may say dumb things here. Consider a client-server application using sockets (.NET with C#, if that matters). The client sends some data, the server processes it and sends back a string. The client sends some other data, the server processes it, queries the db and sends back several hundred items from the database. The client sends some other type of data and the server notifies some other clients. My question is how to set the buffer size correctly for read/write operations. Should I do something like this: byte[] buff = new byte[client.ReceiveBufferSize]? I am thinking of something like this: the client sends data to the server (and the server will follow the same pattern) byte[] bytesToSend = new byte[2048] //2048 to be standard for any command sent by the client bytes 0..1 ->command type bytes 1..2047 ->command parameters byte[] bytesToReceive = new byte[8]/byte[64]/byte[8192] //switch(command type) But what happens when a client is notified by the server without having sent any data? What is the correct way to accomplish what I am trying to do? Thanks for reading.

    Read the article

  • PHP arrays. There must be a simpler method to do this

    - by RisingSun
    I have this array in PHP, returned from the db: Array ( [inv_templates] => Array ( [0] => Array ( [inven_subgroup_template_id] => 1 [inven_group] => Wires [inven_subgroup] => CopperWires [inven_template_id] => 1 [inven_template_name] => CopperWires6G [constrained] => 0 [value_constraints] => [accept_range] => 2 - 16 [information] => Measured Manual ) [1] => Array ( [inven_subgroup_template_id] => 1 [inven_group] => Wires [inven_subgroup] => CopperWires [inven_template_id] => 2 [inven_template_name] => CopperWires2G [constrained] => 0 [value_constraints] => [accept_range] => 1 - 7 [information] => Measured by Automated Calipers ) ) ) I need to output this kind of multidimensional structure: Array ( [Wires] => Array ( [inv_group_name] => Wires [inv_subgroups] => Array ( [CopperWires] => Array ( [inv_subgroup_id] => 1 [inv_subgroup_name] => CopperWires [inv_templates] => Array ( [CopperWires6G] => Array ( [inv_name] => CopperWires6G [inv_id] => 1 ) [CopperWires2G] => Array ( [inv_name] => CopperWires2G [inv_id] => 2 ) ) ) ) ) ) I currently do this: foreach ($data['inv_templates'] as $key => $value) { $processeddata[$value['inven_group']]['inv_group_name'] = $value['inven_group']; $processeddata[$value['inven_group']]['inv_subgroups'][$value['inven_subgroup']]['inv_subgroup_id'] = $value['inven_subgroup_template_id']; $processeddata[$value['inven_group']]['inv_subgroups'][$value['inven_subgroup']]['inv_subgroup_name'] = $value['inven_subgroup']; $processeddata[$value['inven_group']]['inv_subgroups'][$value['inven_subgroup']]['inv_templates'][$value['inven_template_name']]['inv_name'] = $value['inven_template_name']; $processeddata[$value['inven_group']]['inv_subgroups'][$value['inven_subgroup']]['inv_templates'][$value['inven_template_name']]['inv_id'] = $value['inven_template_id']; } return $processeddata; EDIT: a var_export: array ( 'inv_templates' => array ( 0 => array ( 'inven_subgroup_template_id' => '1', 'inven_group' => 'Wires', 'inven_subgroup' => 'CopperWires', 'inven_template_id' => '1', 'inven_template_name' => 'CopperWires6G', 'constrained' => '0', 'value_constraints' => '', 'accept_range' => '2 - 16', 'information' => 'Measured Manual', ), 1 => array ( 'inven_subgroup_template_id' => '1', 'inven_group' => 'Wires', 'inven_subgroup' => 'CopperWires', 'inven_template_id' => '2', 'inven_template_name' => 'CopperWires6G', 'constrained' => '0', 'value_constraints' => '', 'accept_range' => '1 - 7', 'information' => 'Measured by Automated Calipers', ), ), ) The foreach is almost unreadable. There must be a simpler way.

    Read the article

  • How to insert several thousand columns into sqlite3?

    - by user291071
    Similar to my last question, but I ran into a problem. Let's say I have a simple dictionary like the one below, but big. When I try inserting a big dictionary using the method below, I get an operational error on the c.execute(schema) call for having too many columns. What should my alternative method be for populating the columns of an SQL database? Using the ALTER TABLE command and adding each one individually? import sqlite3 con = sqlite3.connect('simple.db') c = con.cursor() dic = { 'x1':{'y1':1.0,'y2':0.0}, 'x2':{'y1':0.0,'y2':2.0,'joe bla':1.5}, 'x3':{'y2':2.0,'y3 45 etc':1.5} } # 1. Find the unique column names. columns = set() for _, cols in dic.items(): for key, _ in cols.items(): columns.add(key) # 2. Create the schema. col_defs = [ # Start with the column for our key name '"row_name" VARCHAR(2) NOT NULL PRIMARY KEY' ] for column in columns: col_defs.append('"%s" REAL NULL' % column) schema = "CREATE TABLE simple (%s);" % ",".join(col_defs) c.execute(schema) # 3. Loop through each row for row_name, cols in dic.items(): # Compile the data we have for this row. col_names = cols.keys() col_values = [str(val) for val in cols.values()] # Insert it. sql = 'INSERT INTO simple ("row_name", "%s") VALUES ("%s", "%s");' % ( '","'.join(col_names), row_name, '","'.join(col_values) )
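
    One way around the column limit is not to build a wide table at all and instead store one (row, column, value) record per dictionary entry; the schema then stays fixed no matter how many distinct keys the dictionary contains. A rough sketch of that idea, with illustrative table and column names rather than anything from the question:

        import sqlite3

        dic = {
            'x1': {'y1': 1.0, 'y2': 0.0},
            'x2': {'y1': 0.0, 'y2': 2.0, 'joe bla': 1.5},
            'x3': {'y2': 2.0, 'y3 45 etc': 1.5},
        }

        con = sqlite3.connect('narrow.db')
        c = con.cursor()

        # One record per (row, column) pair instead of one table column per key.
        c.execute("""CREATE TABLE IF NOT EXISTS simple (
                         row_name TEXT NOT NULL,
                         col_name TEXT NOT NULL,
                         value    REAL,
                         PRIMARY KEY (row_name, col_name))""")

        triples = [(r, col, val) for r, cols in dic.items() for col, val in cols.items()]
        c.executemany("INSERT OR REPLACE INTO simple VALUES (?, ?, ?)", triples)
        con.commit()

        # Reading a single cell back:
        c.execute("SELECT value FROM simple WHERE row_name = ? AND col_name = ?", ('x2', 'y2'))
        print(c.fetchone())
        con.close()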

    Read the article

  • Can a Snapshot transaction fail and only partially commit in a TransactionScope?

    - by Travis Brooks
    Greetings. I stumbled onto a problem today that seems sort of impossible to me, but it's happening... I'm calling some database code in C# that looks something like this: using(var tran = MyDataLayer.Transaction()) { MyDataLayer.ExecSproc(new SprocTheFirst(arg1, arg2)); MyDataLayer.CallSomethingThatEventuallyDoesLinqToSql(arg1, argEtc); tran.Commit(); } I've simplified this a bit for posting, but what's going on is that MyDataLayer.Transaction() makes a TransactionScope with the IsolationLevel set to Snapshot and TransactionScopeOption set to Required. This code gets called hundreds of times a day, and almost always works perfectly. However, after reviewing some data I discovered there are a handful of records created by "SprocTheFirst" but no corresponding data from "CallSomethingThatEventuallyDoesLinqToSql". The only way that records should exist in the tables I'm looking at is from SprocTheFirst, and it's only ever called in this one function, so if it's called and succeeds then I would expect CallSomethingThatEventuallyDoesLinqToSql to get called and succeed as well, because it's all in the same TransactionScope. It's theoretically possible that some other dev mucked around in the DB, but I don't think they have. We also log all exceptions, and I can find nothing unusual happening around the time that the records from SprocTheFirst were created. So, is it possible that a transaction, or more properly a declarative TransactionScope, with Snapshot isolation level can fail somehow and only partially commit?

    Read the article

  • How to insert and select by row and column in sqlite3 with Python (a nice tutorial problem)

    - by user291071
    Let's say I have a simple array of x rows and y columns with corresponding values. What is the best method to do three things: insert or update a value at a specific row and column, and select a value for a given row and column? import sqlite3 con = sqlite3.connect('simple.db') c = con.cursor() c.execute('''create table simple (links text)''') con.commit() dic = {'x1':{'y1':1.0,'y2':0.0},'x2':{'y1':0.0,'y2':2.0,'y3':1.5},'x3':{'y2':2.0,'y3':1.5}} ucols = {} ## my current thoughts are to collect all row values and all column values from dic and populate the table rows and columns accordingly; how to address a cell by row and column I haven't figured out yet ##populate rows in first column for row in dic: print row c.execute("""insert into simple ('links') values ('%s')"""%row) con.commit() ##unique columns for row in dic: print row for col in dic[row]: print col ucols[col]=dic[row][col] ##populate columns for col in ucols: print col c.execute("alter table simple add column '%s' 'float'" % col) con.commit() #functions needed ##insert values into sql by row x and column y? how to do this, e.g. x1 and y2 should put in 0.0 ##I tried as follows, didn't work for row in dic: for col in dic[row]: val =dic[row][col] c.execute("""update simple SET '%s' = '%f' WHERE 'links'='%s'"""%(col,val,row)) con.commit() ##update value at a specific row x and column y? ## select a value at a specific row x and column y?
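
    If the wide-table layout is kept, the UPDATE that "didn't work" is usually fixed by quoting identifiers with double quotes and passing values as bound parameters, so the WHERE clause compares the links column rather than the string literal 'links'. A hedged sketch of that pattern, assuming the table and its extra columns were already created as in the code above:

        import sqlite3

        con = sqlite3.connect('simple.db')
        c = con.cursor()

        def set_cell(row, col, val):
            # Column names are identifiers (double quotes); values are bound parameters.
            c.execute('UPDATE simple SET "%s" = ? WHERE links = ?' % col, (val, row))
            con.commit()

        def get_cell(row, col):
            c.execute('SELECT "%s" FROM simple WHERE links = ?' % col, (row,))
            hit = c.fetchone()
            return hit[0] if hit else None

        set_cell('x1', 'y2', 0.0)
        print(get_cell('x1', 'y2'))
        con.close()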

    Read the article

  • What's the best approach for getting into VS2010, C# 4, and WPF if my background is in C++/MFC

    - by Canacourse
    All my past programming experience has been in C++ on VS2003/8, mostly service-based and completely self-taught. Two years ago I had to create my first real GUI app and (foolishly) chose MFC. I got the app working, but it took a long time and was a bit of a nightmare to learn MFC (and its many shortcomings), and I ended up with a reliable, workable app that was difficult to change or extend. Again I have to create another GUI app, more complex than the first; again it will be created from scratch and will only ever be used on Windows. I had put off learning C# for a long time, but not wishing to revisit MFC, I have decided that the new application will be birthed in VS2010 and WPF 4 will be the midwife. I'm trying to avoid the several expensive (time-wise) mistakes I made previously. I'm looking for good books/tutorials on the current versions of C# 4 and WPF 4, and also general advice on the best approach. The application will do several things, one of them being persisting info in a SQL DB. So I'm thinking LINQ for that? Please chip in...

    Read the article

  • XAMPP, MAMP, MySQL, PDO - A deadly combination?

    - by Rich
    Hey folks, previously I've worked on a Symfony project (MySQL PDO based) with XAMPP, with no problems. Since then, I've moved to MAMP - I prefer this - but have hit a snag with my database connection. I've created a test.php like this: <?php try { $dbh = new PDO('mysql:host=localhost;dbname=xxx;port=8889', 'xxx', 'xxx'); foreach($dbh->query('SELECT * from FOO') as $row) { print_r($row); } $dbh = null; } catch (PDOException $e) { print "Error!: " . $e->getMessage() . "<br/>"; die(); } ?> Obviously the xxx's are real db connection details. When served by MAMP this seems to work fine. From the terminal, however, I keep getting the following error when running the file: Error!: SQLSTATE[28000] [1045] Access denied for user 'xxx'@'localhost' (using password: YES) I'm not sure if the terminal is aiming at a different MySQL socket or something along those lines; I've tried pointing it to the MAMP socket with a local php.ini file. Any help would be greatly appreciated.

    Read the article

  • Displaying code with <pre> tags.

    - by iMaster
    Currently I'm using <pre><code> code here </code></pre> to display code. I'm pulling this information from a DB for a blog. The problem I'm having is that some of the code isn't showing. For example, in the source code I have this: <pre><code><br /> echo '<ul class="mylist"><li><ul class="left">'; foreach($nameArray as $name) { if($countervar == $half) { echo '</ul></li>'; echo'<li><ul class="right">'; } echo '<li>$name</li>'; ++$i; } echo '</ul></li>'; echo '</ul>'; ?> But all that shows up is this: echo ''; foreach($nameArray as $name) { if($countervar == $half) { echo ''; echo''; } echo '$name'; ++$i; } echo ' And there are some really weird formatting/spacing issues as well. Any ideas as to what is causing this? I should also mention that some of the other sets of code show up just fine.

    Read the article

  • Performing centralized authorization for multiple applications

    - by Vaibhav
    Here's a question that I have been wrestling with for a while. We have a situation wherein we have a number of applications that we have created. These have grown organically over a period of time. All of these applications have permissions code built into them that controls access to various parts of the application, depending on whether the currently logged-in user has the necessary permissions or not. Alongside these applications is a utility application which allows an administrator to map users to permissions for all applications - the way it works is that every application has code which reads the external database of said utility application to check whether the currently logged-in user has the necessary permission or not. Now, the question is this: should the user-permissions mapping information reside in and be owned by the applications themselves, or is it okay to have this information reside within an external entity/DB (in this case, the utility application's database)? Part of me thinks that application permissions are very specific to the application context, so they shouldn't be separated from the application itself. But I am not sure. Any comments?
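
    One way to keep both options open is to hide the lookup behind a small interface that each application calls, so whether the mapping lives in the application's own database or in the shared utility database becomes a swappable detail rather than a design commitment. A sketch of that seam, written in Python purely for illustration (every name here is invented):

        class PermissionSource(object):
            """The seam: applications ask this, not a specific database."""
            def has_permission(self, user_id, permission):
                raise NotImplementedError

        class LocalPermissionSource(PermissionSource):
            """Mapping owned by the application itself."""
            def __init__(self, own_db):
                self.own_db = own_db
            def has_permission(self, user_id, permission):
                return self.own_db.lookup(user_id, permission)

        class CentralPermissionSource(PermissionSource):
            """Mapping owned by the shared utility application's database."""
            def __init__(self, utility_db, app_name):
                self.utility_db = utility_db
                self.app_name = app_name
            def has_permission(self, user_id, permission):
                return self.utility_db.lookup(self.app_name, user_id, permission)

        # Application code depends only on the interface:
        def can_view_reports(source, user_id):
            return source.has_permission(user_id, "reports.view")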

    Read the article

  • Assigning values to labels from the database depending on listbox values

    - by SurajVitekar
    I want to assign the values of five labels from the database according to the value selected in the listbox. The db query returns a single column with multiple records. Please help. I'm working with C# 2010 and MS SQL. My current code is: private void listBox1_SelectedIndexChanged(object sender, EventArgs e) { try { String c1, c2; c1 = "NULL"; MessageBox.Show("LB index :"+listBox1.SelectedIndex.ToString()); //p = listBox1.SelectedItem.ToString(); SqlConnection con = new SqlConnection(); con.ConnectionString = "Data Source=localhost;Initial Catalog=eVoting;Integrated Security=True;Pooling=False"; con.Open(); MessageBox.Show("List bOx sect :"+listBox1.SelectedValue.ToString()); SqlCommand cmd = new SqlCommand("select Firstname from candidates where position ='" + listBox1.SelectedValue.ToString() + "'", con); int index = 0; SqlDataReader reader = cmd.ExecuteReader(); while(reader.Read()) { if (index == 0) { c1 = reader[index].ToString(); radioButton1.Text = c1; } if (index == 1) { c1 = reader[index].ToString(); radioButton2.Text = c1; } if (index == 2) { c1 = reader[index].ToString(); radioButton3.Text = c1; } if (index == 3) { c1 = reader[index].ToString(); radioButton4.Text = c1; } if (index == 4) { c1 = reader[index].ToString(); radioButton4.Text = c1; } if (index == 5) { c1 = reader[index].ToString(); radioButton5.Text = c1; } MessageBox.Show("c1 :" + c1); index++; } } catch (Exception E) { } }

    Read the article

  • @dynamic property needs setter with multiple behaviors

    - by ambertch
    I have a class that contains multiple user objects and as such has an array of them as an instance variable: NSMutableArray *users; The tricky part is setting it. I am deserializing these objects from a server via Objective Resource, and for backend reasons users can only be returned as a long string of UIDs - what I have locally is a separate dictionary of users keyed to UIDs. Given the string uidString of comma-separated UIDs, I override the default setter and populate the actual user objects: @dynamic users; - (void)setUsers:(id)uidString { users = [NSMutableArray arrayWithArray: [[User allUsersDictionary] objectsForKeys:[(NSString*)uidString componentsSeparatedByString:@","]]]; } The problem is this: I now serialize these to the database using SQLitePO, which stores them as the array of user objects, not the original string. So when I retrieve it from the database, the setter mistakenly treats this array of user objects as a string! What I actually want is to adjust the setter's behavior depending on whether it gets this object from the DB or over the network. I can't just make the getter serialize back into a string without tearing up a lot of code that references this array of user objects, and I tried to detect in the setter whether I have a string or an array coming in: if ([uidString respondsToSelector:@selector(addObject)]) { // Already an array, so don't do anything - just assign users = uidString but no success... so I'm kind of stuck - any suggestions? Thanks in advance!

    Read the article

  • Strange behaviour of Spring transaction support for JPA + Hibernate + @Transactional annotation

    - by abovesun
    I found really strange behavior in a relatively simple use case; probably I can't understand it because my knowledge of the nature of Spring's @Transactional is not deep, but it is quite interesting. I have a simple User DAO that extends Spring's JpaDaoSupport class and contains a standard save method: @Transactional public User save(User user) { getJpaTemplate().persist(user); return user; } It was working fine until I added a new method to the same class: User getSuperUser(). This method should return the user with isAdmin == true, and if there is no super user in the db, the method should create one. This is what it looked like: public User createSuperUser() { User admin = null; try { admin = (User) getJpaTemplate().execute(new JpaCallback() { public Object doInJpa(EntityManager em) throws PersistenceException { return em.createQuery("select u from UserImpl u where u.admin = true").getSingleResult(); } }); } catch (EmptyResultDataAccessException ex) { admin = new User("login", "password"); admin.setAdmin(true); save(admin); // THIS IS THE POINT WHERE THE STRANGE THING COMES OUT } return admin; } As you can see, the code is straightforward, and I was very confused when I found out that no transaction was created and committed on invocation of the save(admin) method, and no new user was actually created, despite the @Transactional annotation. As a result we have this situation: when the save() method is invoked from outside the UserDAO class, the @Transactional annotation is honored and the user is successfully created, but if save() is invoked from inside another method of the same DAO class, the @Transactional annotation is ignored. Here is how I changed the save() method to force it to always create a transaction: public User save(User user) { getJpaTemplate().execute(new JpaCallback() { public Object doInJpa(EntityManager em) throws PersistenceException { em.getTransaction().begin(); em.persist(user); em.getTransaction().commit(); return null; } }); return user; } As you can see, I manually invoke begin and commit. Any ideas?

    Read the article

  • Multiple "ObjectChangeTracker" getting created, can it be avoided?

    - by user555937
    Hi, we are working on a POC with the following architecture (MVVM): WPF (client) + WCF + Model (data access) + ADO.NET Entity Framework 4.0 (with SQL Server 2008 R2 as the DB). All are different projects. In the data access layer we have created different entity models (edmx) based on functionality: the tables under a particular flow are grouped into separate entity models. We are using self-tracking entities to communicate back and forth with the WPF client through the WCF service. With a single model everything works fine, but when we created multiple models a few issues started coming up. Multiple models have a few duplicate tables/entities. The two problems are: 1) When we try to access entities from different models, multiple "ObjectChangeTracker" objects get created. E.g. CompanyModel(edmx) - Company(Entity) - ObjectChangeTracker, ObjectState; ProductModel(edmx) - Customer(Entity) - ObjectChangeTracker1, ObjectState1; OrderModel(edmx) - Order(Entity) - ObjectChangeTracker2, ObjectState2. Is there any way to avoid this? 2) There are a few tables which are shared across the models, e.g. Company(Entity) is used in all the models above. At compile time this does not throw any error, but at runtime it gives an error saying "Schema specified is not valid. Errors: The mapping of CLR type to EDM type is ambiguous because multiple CLR types match the EDM type "Company"". To resolve this, we renamed the entities with a prefix to make them unique. Is there any other way we can resolve this without changing the name of the entity in the same assembly? Thanks in advance; it would be appreciated if anyone has an approach for these issues. Thanks, Kiran

    Read the article

  • Stored procedure for generic MERGE

    - by GilliVilla
    I have a set of 10 tables in a database (DB1), and there are 10 tables in another database (DB2) with exactly the same schema on the same SQL Server 2008 R2 database server machine. The 10 tables in DB1 are frequently updated with data. I intend to write a stored procedure that would run once every day to synchronize the 10 tables in DB1 with DB2. The stored procedure would make use of the MERGE statement. Now, my aim is to make this as generic and parametrized as possible: that is, to accommodate more tables down the line and to accommodate different source and target DB names. Definitely no hard coding is intended. This is my algorithm so far: 1) take the database names as parameters; 2) have the first query within the stored procedure return the names of the 10 tables from a lookup table (this can be 10, 20 or whatever); 3) have a generic MERGE statement that does the sync for each of those tables (based on the primary key?). That last part is where I need more input. What is the best way to achieve this stored procedure? SQL syntax would be helpful.

    Read the article

  • Best way to carry & modify a variable through various instances and functions?

    - by bobsoap
    I'm looking for the "best practice" way to achieve a message / notification system. I'm using an OOP-based approach for the script and would like to do something along the lines of this: if(!$something) $messages->add('Something doesn\'t exist!'); The add() method in the messages class looks somewhat like this: class messages { public function add($new) { $messages = $THIS_IS_WHAT_IM_LOOKING_FOR; //array $messages[] = $new; $THIS_IS_WHAT_IM_LOOKING_FOR = $messages; } } In the end, there is a method which reads out $messages and returns every message as nicely formatted HTML. So the question is: what type of variable should I be using for $THIS_IS_WHAT_IM_LOOKING_FOR? I don't want to make this use the database. Querying the db every time just for some messages that occur at runtime and disappear after 5 seconds seems like overkill. Using global constants for this is apparently worst practice, since constants are not meant to be variables that change over time; I don't even know if it would work. I don't want to always pass in and return the existing $messages array through the method every time I want to add a new message. I even tried using a session var for this, but that is obviously not suited for this purpose at all (it will always be one pageload too late). Any suggestions? Thanks!

    Read the article

  • Map inheritance from generic class in Linq To SQL

    - by Ksenia Mukhortova
    Hi everyone, I'm trying to map my inheritance hierarchy to the DB using Linq to SQL. The inheritance is like this; the classes are POCO, without any Linq to SQL attributes: public interface IStage { ... } public abstract class SimpleStage<T> : IStage where T : Process { ... } public class ConcreteStage : SimpleStage<ConcreteProcess> { ... } Here is the mapping: <Database Name="NNN" xmlns="http://schemas.microsoft.com/linqtosql/mapping/2007"> <Table Name="dbo.Stage" Member="Stage"> <Type Name="BusinessLogic.Domain.IStage"> <Column Name="ID" Member="ID" DbType="Int NOT NULL IDENTITY" IsPrimaryKey="true" IsDbGenerated="true" AutoSync="OnInsert" /> <Column Name="StageType" Member="StageType" IsDiscriminator="true" /> <Type Name="BusinessLogic.Domain.SimpleStage" IsInheritanceDefault="true"> <Type Name="BusinessLogic.Domain.ConcreteStage" IsInheritanceDefault="true" InheritanceCode="1"/> </Type> </Type> </Table> </Database> At runtime I get this error: System.InvalidOperationException was unhandled Message="Mapping Problem: Cannot find runtime type for type mapping 'BusinessLogic.Domain.SimpleStage'." Neither specifying SimpleStage nor SimpleStage<T> in the mapping file helps - the runtime keeps producing different types of errors. The DC is created like this: StreamReader sr = new StreamReader(@"MappingFile.map"); XmlMappingSource mapping = XmlMappingSource.FromStream(sr.BaseStream); DataContext dc = new DataContext(@"connection string", mapping); If Linq to SQL doesn't support this, could you please advise some other ORM that does? Thanks in advance, regards, Ksenia

    Read the article

  • Does breaking chained Select()s in LINQ to objects hurt performance?

    - by Justin
    Take the following pseudo C# code: using System; using System.Data; using System.Linq; using System.Collections.Generic; public IEnumerable<IDataRecord> GetRecords(string sql) { // DB logic goes here } public IEnumerable<IEmployer> Employers() { string sql = "select EmployerID from employer"; var ids = GetRecords(sql).Select(record => (record["EmployerID"] as int?) ?? 0); return ids.Select(employerID => new Employer(employerID) as IEmployer); } Would it be faster if the two Select() calls were combined? Is there an extra iteration in the code above? Is the following code faster? public IEnumerable<IEmployer> Employers() { string sql = "select EmployerID from employer"; return Query.Records(sql).Select(record => new Employer((record["EmployerID"] as int?) ?? 0) as IEmployer); } I think the first example is more readable if there is no difference in performance.

    Read the article

  • Is it possible to return a list of numbers from a Sybase function?

    - by ps_rs4
    I'm trying to overcome a very serious performance issue in which Sybase refuses to use the primary key index on a large table because one of the required fields is specified indirectly through another table - or, in other words: SELECT ... FROM BIGTABLE WHERE KFIELD = 123 runs in ms, but SELECT ... FROM BIGTABLE, LTLTBL WHERE KFIELD = LTLTBL.LOOKUP AND LTLTBL.UNIQUEID = 'STRINGREPOF123' takes 30 - 40 seconds. I've managed to work around this first problem by using a function that basically lets me do this: SELECT ... FROM BIGTABLE WHERE KFIELD = MYFUNC('STRINGREPOF123') which also runs in ms. The problem, however, is that this approach only works when there is a single value returned by MYFUNC, but I have some cases where it may return 2 or 3 values. I know that the SQL SELECT ... FROM BIGTABLE WHERE KFIELD IN (123,456,789) also returns in millis, so I'd like to have a function that returns a list of possible values rather than just a single one - is this possible? Sadly the application is running on Sybase ASA 9. Yes, I know it is old and is scheduled to be refreshed, but there's nothing I can do about it now, so I need logic that will work with this version of the DB. Thanks in advance for any assistance on this matter.

    Read the article

  • Configure nHibernate for multiple-project solution

    - by NoOne
    Hello, I'm doing a project with C# WinForms. This project is composed of: Client project: Windows Forms where the user will do CRUD operations on the models; Server project; Common project: this project will hold the models (in the image there is only the model Item); ListSingleton: a remote object that will do the operations on the models. I already have all the communication working, but now I need to work on the persistence of the data in a MySQL database. I was trying to use NHibernate but I'm having some trouble. My main problem is how to organize my NHibernate configuration. In which project do I keep the mapping? The Common project? In which project do I keep the NHibernate configuration file (App.config)? The ListSingleton project? In which project do I do this: Configuration cfg = new Configuration(); cfg.AddXmlFile("Item.hbm.xml"); ISessionFactory factory = cfg.BuildSessionFactory(); ISession session = factory.OpenSession(); ITransaction transaction = session.BeginTransaction(); Item newItem = new Item("BLAA"); // Tell NHibernate that this object should be saved session.Save(newItem); // commit all of the changes to the DB and close the ISession transaction.Commit(); session.Close(); The ListSingleton project? Although I have a reference to the Common project in ListSingleton, I keep getting an error on the AddXmlFile line… My mapping is correct, because I tried it in a one-project solution and it worked :X

    Read the article

  • Django Many-to-Many Question

    - by DZ
    My question seems like a common problem that, whenever I have seen it asked, is never really asked right or never answered. So I'm going to try to get the question right, and maybe someone knows how to resolve the issue or can correct my understanding. The problem: when you have a many-to-many relationship (related_name, not through) and you are trying to use the admin interface, you are required to input one of the relationships even though it does not have to exist for you to create the first entry. Meaning you have to assign a group to an event in order to create the group. Wow, that sounds complicated, so I can see why the question doesn't get answered. Let's try the non-code explanation... First, the important versions: Django 1.1.1, Python 2.6. So I have a model where I created a many-to-many relationship and I'm using related_name. I'm creating an app that is an event organizer (for simplicity let's say events, although they could be any type). For this first post I'm going to stay away from the code and just try to explain. A few keys (explaining the notation): ** - many-to-many. So in the model we have 1) The Main Event (this is the main model) 2) Groups (linked to events, and there can be many events for a group) a) Events**. I have simplified this example a little, because I recognize someone will ask: what does it matter, just create the event first... But there are specific variations where that will not work. What the many-to-many related_name does is create another table with the indices of the two other tables. Nothing says that this extra table HAS to be populated, because if I look in the database and work within phpMyAdmin I can create a group without registering an event; since the connection between the two is a separate table, the DB does not care. How do I make the admin interface realize this? OK, I know that's a lot, so I hope I have explained it clearly. Thank you, anyone, for your comments/thoughts/advice.
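
    For what it's worth, the "required" behaviour here usually comes from the admin form rather than from the join table itself; marking the many-to-many field with blank=True normally lets a group be saved with no events attached. A hedged sketch of the models as I understand them from the description (all names are invented, Django 1.1-era syntax):

        from django.db import models

        class Event(models.Model):
            name = models.CharField(max_length=100)

        class Group(models.Model):
            name = models.CharField(max_length=100)
            # blank=True tells the admin form this field may be left empty;
            # the join table simply stays unpopulated for that group.
            events = models.ManyToManyField(Event, related_name='groups', blank=True)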

    Read the article

  • Ruby on Rails: implement search with autocomplete

    - by user429400
    I've implemented a search box that searches the "Illnesses" table and the "Symptoms" table in my DB. Now I want to add autocomplete to the search box. I've created a new controller called "auto_complete_controller" which returns the autocomplete data. I'm just not sure how to combine the search functionality and the autocomplete functionality: I want the "index" action in my search controller to return the search results, and the "index" action in my auto_complete controller to return the autocomplete data. Please guide me on how to fix my HTML syntax and what to write in the js.coffee file. I'm using Rails 3.x with jQuery UI for autocomplete; I prefer a server-side solution. This is my current code: main_page/index.html.erb: <p> <b>Syptoms / Illnesses</b> <%= form_tag search_path, :method => 'get' do %> <p> <%= text_field_tag :search, params[:search] %> <br/> <%= submit_tag "Search", :name => nil %> </p> <% end %> </p> auto_complete_controller.rb: class AutoCompleteController < ApplicationController def index @results = Illness.order(:name).where("name like ?", "%#{params[:term]}%") + Symptom.order(:name).where("name like ?", "%#{params[:term]}%") render json: @results.map(&:name) end end search_controller.rb: class SearchController < ApplicationController def index @results = Illness.search(params[:search]) + Symptom.search(params[:search]) respond_to do |format| format.html # index.html.erb format.json { render json: @results } end end end Thanks, Li

    Read the article

  • Return REF CURSOR to procedure generated data

    - by ThaDon
    I need to write a sproc which performs some INSERTs on a table and compiles a list of "statuses" for each row based on how well the INSERT went. Each row will be inserted within a loop; the loop iterates over a cursor that supplies some values for the INSERT statement. What I need to return is a resultset which looks like this: FIELDS_FROM_ROW_BEING_INSERTED.., STATUS VARCHAR2. The STATUS is determined by how the INSERT went. For instance, if the INSERT caused a DUP_VAL_ON_INDEX exception indicating there was a duplicate row, I'd set the STATUS to "Dupe". If all went well, I'd set it to "SUCCESS" and proceed to the next row. By the end of it all, I'd have a resultset of N rows, where N is the number of insert statements performed, and each row contains some identifying info for the row being inserted, along with the "STATUS" of the insertion. Since there is no table in my DB to store the values I'd like to pass back to the user, I'm wondering how I can return the info. A temporary table? It seems that in Oracle temporary tables are "global", and I'm not sure I would want a global table - are there any temporary tables that get dropped after a session is done?

    Read the article

  • MySql ODBC connection in VB6 on WinXP VERY slow. Other machines on same network are fast.

    - by Matthew
    Hi all, I have a VB6 application that has been performing very well. Recently, we upgraded our server to a Windows 2003 server. Migration of the databases and shares went well and we experienced no problems - except one, and it has happened at multiple sites. I use the MySQL ODBC 5.1 connector to point to my MySQL database. On identical machines (as far as I can tell - they are client machines, not ours), access to the DB is lightning fast on all but one computer. They use the same software and have the same connection strings, and I'm sure it's not the program but the ODBC connection. When I press the 'Test Connection' button in the ODBC connection string window, it can take up to 10 seconds on the poorly performing machine to respond with a success; all the other computers are instantaneous. I have tried using the IP address versus the machine name in the UDL, with no change. I enabled option 256, which sped it up initially, but it's slow again. Most of the time after a restart the program will be fast for an hour or so, then go slow again with option 256 enabled. Frankly, I am out of ideas and willing to entertain any and all suggestions. This is getting pretty frustrating. Has anyone ever experienced anything like this?

    Read the article
