Search Results

Search found 16059 results on 643 pages for 'global temp tables'.

Page 481/643 | < Previous Page | 477 478 479 480 481 482 483 484 485 486 487 488  | Next Page >

  • How do I find the user that has both a cat and a dog?

    - by brad
    I want to do a search across two tables that have a many-to-one relationship, e.g. class User < ActiveRecord::Base has_many :pets end and class Pet < ActiveRecord::Base belongs_to :user end. Now let's say I have some data like so: users id name 1 Bob 2 Joe 3 Brian pets id user_id animal 1 1 cat 2 1 dog 3 2 cat 4 3 dog What I want to do is create an ActiveRecord query that will return a user that has both a cat and a dog (i.e. user 1 - Bob). My attempt at this so far is User.joins(:pets).where('pets.animal = ? AND pets.animal = ?', 'dog', 'cat') Now I understand why this doesn't work - it's looking for a pet that is both a dog and a cat, so it returns nothing. I don't know how to modify it to give me the answer I want, however. Does anyone have any suggestions? This seems like it should be easy - it doesn't seem like an especially unusual situation.
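
    A common SQL formulation (a sketch, not taken from the post; table and column names follow the example data above) is to group by user and require both animals:

        SELECT u.id, u.name
        FROM users u
        JOIN pets p ON p.user_id = u.id
        WHERE p.animal IN ('cat', 'dog')
        GROUP BY u.id, u.name
        HAVING COUNT(DISTINCT p.animal) = 2;   -- the user must have both animals

    The ActiveRecord version would be the same joins/group/having chain rather than two conditions on the same column.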

    Read the article

  • Composite Primary and Cardinality

    - by srini.venigalla
    I have some questions on composite primary keys and the cardinality of the columns. I searched the web, but did not find any definitive answer, so I am trying again. The questions are: Context: large (50M - 500M rows) OLAP prep tables, not NoSQL, not columnar; MySQL and DB2. 1) Does the order of keys in a PK matter? 2) If the cardinality of the columns varies heavily, which should come first? For example, if I have CLIENT/CAMPAIGN/PROGRAM where CLIENT is highly cardinal, CAMPAIGN is moderate, and PROGRAM is almost like a bitmap index, what order is best? 3) What order is best for joins, both when there is a WHERE clause and when there is none (for views)? Thanks in advance.
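
    For reference, the ordering being asked about is simply the column order in the key declaration (a sketch with assumed table and column names, not from the post):

        CREATE TABLE campaign_fact (
            client_id   INT NOT NULL,   -- high cardinality
            campaign_id INT NOT NULL,   -- moderate cardinality
            program_id  INT NOT NULL,   -- low cardinality
            PRIMARY KEY (client_id, campaign_id, program_id)   -- the leftmost column leads the index
        );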

    Read the article

  • Huge page buffer vs. multiple simultaneous processes

    - by Andrei K.
    One of our customers has a 35 GB database with an average active connection count of about 70-80. Some tables in the database have more than 10M records per table. Now they have bought a new server: 4 * 6 cores = 24 CPU cores, 48 GB RAM, 2 RAID controllers with 256 MB cache, with 8 SAS 15K HDDs on each. 64-bit OS. I'm wondering what would be the fastest configuration: 1) FB 2.5 SuperServer with a huge buffer (3,500,000 pages * 8192 bytes = 29 GB), or 2) FB 2.5 Classic with a small buffer of 1000 pages. Maybe someone has tested such a case before and will save me days of work :) Thanks in advance.

    Read the article

  • How to add condition on multiple-join table

    - by Jean-Philippe
    Hi, I have these two tables: client: id (int) #PK name (varchar) client_category: id (int) #PK client_id (int) category (int) Let's say I have this data: client: {(1, "JP"), (2, "Simon")} client_category: {(1, 1, 1), (2, 1, 2), (3, 1, 3), (4, 2, 2)} tl;dr client #1 has categories 1, 2, 3 and client #2 has only category 2. I am trying to build a query that would allow me to search on multiple categories. For example, I would like to find every client that has at least categories 1 and 2 (which would return client #1). How can I achieve that? Thanks!
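
    One way to express "has at least categories 1 and 2" in SQL (a sketch, not from the post; names follow the schema above):

        SELECT c.id, c.name
        FROM client c
        JOIN client_category cc ON cc.client_id = c.id
        WHERE cc.category IN (1, 2)
        GROUP BY c.id, c.name
        HAVING COUNT(DISTINCT cc.category) = 2;   -- both requested categories are present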

    Read the article

  • VendInvoiceJour.InvoiceAccount <- VendTable.AccountNum relation

    - by vukis
    Hi. I have the following situation: I need to join VendInvoiceJour.InvoiceAccount <- VendTable.AccountNum and take VendTable.VendGroup. In all cases (queries, or even views) Dynamics AX joins the tables as VendInvoiceJour.OrderAccount <- VendTable.AccountNum, not VendInvoiceJour.InvoiceAccount <- VendTable.AccountNum. I'm trying to use this kind of query: qBdSVendJour = element.query().dataSourceTable(tablenum(VendInvoiceJour)); qBdSVendTbl = qBdSVendJour.addDataSource(tablenum(VendTable)); qBdSVendTbl.relations(true); qBdSVendTbl.joinMode(JoinMode::InnerJoin); qBdSVendTbl.fetchMode(QueryFetchMode::One2One); qBdSVendTbl.addLink(FieldNum(VendInvoiceJour,InvoiceAccount),FieldNum(VendTable,AccountNum)); //(Dynamics AX automatically corrects InvoiceAccount to OrderAccount in reports if trying this link in MorphX)

    Read the article

  • cakephp find with ID inArray

    - by user331321
    Hi! I have these tables: Clients (id, name, address, group, ...), Services_Clients (id, client_id, login, passwd), documents (id, client_id, date, path). A client may or may not belong to a group, and I cannot change the database structure. 1) In a login form, from the user/passwd I get a client group. 2) After that I get all the client IDs from that group like this: $jur = $this->Client->find('all', array('conditions' => array('Client.group' => $group['Client']['group']))); OK, now I need to get all documents for the clients of that group, so... how can I achieve that? I need to do a find in my model, restricted to the IDs contained in the $jur variable. Sorry about my English...
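
    In plain SQL the result being asked for would look roughly like this (a sketch; table names follow the question, and @login_group is a placeholder for the group value obtained at login):

        SELECT d.*
        FROM documents d
        JOIN Clients c ON c.id = d.client_id
        WHERE c.`group` = @login_group;   -- `group` is a reserved word, hence the backticks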

    Read the article

  • access custom group

    - by Carlos
    I have my Access 2007 database configured to use "Custom" groups in the navigation pane. I've grouped all my tables in a way that makes sense. However, whenever I update a linked table, it loses its grouping. I have not been able to find a way to avoid this. Since it seems to be unavoidable, I'd like to simply have a macro that adds the table back to the right group programmatically. I have not found any examples of how to do this. Any suggestions?

    Read the article

  • Little more help with writing an output buffer with libjpeg

    - by Richard Knop
    So I have managed to find another question discussing how to use the libjpeg to compress an image to jpeg. I have found this code which is supposed to work: Compressing IplImage to JPEG using libjpeg in OpenCV Here's the code (it compiles ok): /* This a custom destination manager for jpeglib that enables the use of memory to memory compression. See IJG documentation for details. */ typedef struct { struct jpeg_destination_mgr pub; /* base class */ JOCTET* buffer; /* buffer start address */ int bufsize; /* size of buffer */ size_t datasize; /* final size of compressed data */ int* outsize; /* user pointer to datasize */ int errcount; /* counts up write errors due to buffer overruns */ } memory_destination_mgr; typedef memory_destination_mgr* mem_dest_ptr; /* ------------------------------------------------------------- */ /* MEMORY DESTINATION INTERFACE METHODS */ /* ------------------------------------------------------------- */ /* This function is called by the library before any data gets written */ METHODDEF(void) init_destination (j_compress_ptr cinfo) { mem_dest_ptr dest = (mem_dest_ptr)cinfo->dest; dest->pub.next_output_byte = dest->buffer; /* set destination buffer */ dest->pub.free_in_buffer = dest->bufsize; /* input buffer size */ dest->datasize = 0; /* reset output size */ dest->errcount = 0; /* reset error count */ } /* This function is called by the library if the buffer fills up I just reset destination pointer and buffer size here. Note that this behavior, while preventing seg faults will lead to invalid output streams as data is over- written. */ METHODDEF(boolean) empty_output_buffer (j_compress_ptr cinfo) { mem_dest_ptr dest = (mem_dest_ptr)cinfo->dest; dest->pub.next_output_byte = dest->buffer; dest->pub.free_in_buffer = dest->bufsize; ++dest->errcount; /* need to increase error count */ return TRUE; } /* Usually the library wants to flush output here. I will calculate output buffer size here. Note that results become incorrect, once empty_output_buffer was called. This situation is notified by errcount. */ METHODDEF(void) term_destination (j_compress_ptr cinfo) { mem_dest_ptr dest = (mem_dest_ptr)cinfo->dest; dest->datasize = dest->bufsize - dest->pub.free_in_buffer; if (dest->outsize) *dest->outsize += (int)dest->datasize; } /* Override the default destination manager initialization provided by jpeglib. Since we want to use memory-to-memory compression, we need to use our own destination manager. */ GLOBAL(void) jpeg_memory_dest (j_compress_ptr cinfo, JOCTET* buffer, int bufsize, int* outsize) { mem_dest_ptr dest; /* first call for this instance - need to setup */ if (cinfo->dest == 0) { cinfo->dest = (struct jpeg_destination_mgr *) (*cinfo->mem->alloc_small) ((j_common_ptr) cinfo, JPOOL_PERMANENT, sizeof (memory_destination_mgr)); } dest = (mem_dest_ptr) cinfo->dest; dest->bufsize = bufsize; dest->buffer = buffer; dest->outsize = outsize; /* set method callbacks */ dest->pub.init_destination = init_destination; dest->pub.empty_output_buffer = empty_output_buffer; dest->pub.term_destination = term_destination; } /* ------------------------------------------------------------- */ /* MEMORY SOURCE INTERFACE METHODS */ /* ------------------------------------------------------------- */ /* Called before data is read */ METHODDEF(void) init_source (j_decompress_ptr dinfo) { /* nothing to do here, really. I mean. I'm not lazy or something, but... we're actually through here. */ } /* Called if the decoder wants some bytes that we cannot provide... 
*/ METHODDEF(boolean) fill_input_buffer (j_decompress_ptr dinfo) { /* we can't do anything about this. This might happen if the provided buffer is either invalid with regards to its content or just a to small bufsize has been given. */ /* fail. */ return FALSE; } /* From IJG docs: "it's not clear that being smart is worth much trouble" So I save myself some trouble by ignoring this bit. */ METHODDEF(void) skip_input_data (j_decompress_ptr dinfo, INT32 num_bytes) { /* There might be more data to skip than available in buffer. This clearly is an error, so screw this mess. */ if ((size_t)num_bytes > dinfo->src->bytes_in_buffer) { dinfo->src->next_input_byte = 0; /* no buffer byte */ dinfo->src->bytes_in_buffer = 0; /* no input left */ } else { dinfo->src->next_input_byte += num_bytes; dinfo->src->bytes_in_buffer -= num_bytes; } } /* Finished with decompression */ METHODDEF(void) term_source (j_decompress_ptr dinfo) { /* Again. Absolute laziness. Nothing to do here. Boring. */ } GLOBAL(void) jpeg_memory_src (j_decompress_ptr dinfo, unsigned char* buffer, size_t size) { struct jpeg_source_mgr* src; /* first call for this instance - need to setup */ if (dinfo->src == 0) { dinfo->src = (struct jpeg_source_mgr *) (*dinfo->mem->alloc_small) ((j_common_ptr) dinfo, JPOOL_PERMANENT, sizeof (struct jpeg_source_mgr)); } src = dinfo->src; src->next_input_byte = buffer; src->bytes_in_buffer = size; src->init_source = init_source; src->fill_input_buffer = fill_input_buffer; src->skip_input_data = skip_input_data; src->term_source = term_source; /* IJG recommend to use their function - as I don't know **** about how to do better, I follow this recommendation */ src->resync_to_restart = jpeg_resync_to_restart; } All I need to do is replace the jpeg_stdio_dest in my program with this code: int numBytes = 0; //size of jpeg after compression char * storage = new char[150000]; //storage buffer JOCTET *jpgbuff = (JOCTET*)storage; //JOCTET pointer to buffer jpeg_memory_dest(&cinfo,jpgbuff,150000,&numBytes); So I need some help to incorporate the above four lines into this function which now works but writes to a file instead of a memory: int write_jpeg_file( char *filename ) { struct jpeg_compress_struct cinfo; struct jpeg_error_mgr jerr; /* this is a pointer to one row of image data */ JSAMPROW row_pointer[1]; FILE *outfile = fopen( filename, "wb" ); if ( !outfile ) { printf("Error opening output jpeg file %s\n!", filename ); return -1; } cinfo.err = jpeg_std_error( &jerr ); jpeg_create_compress(&cinfo); jpeg_stdio_dest(&cinfo, outfile); /* Setting the parameters of the output file here */ cinfo.image_width = width; cinfo.image_height = height; cinfo.input_components = bytes_per_pixel; cinfo.in_color_space = color_space; /* default compression parameters, we shouldn't be worried about these */ jpeg_set_defaults( &cinfo ); /* Now do the compression .. */ jpeg_start_compress( &cinfo, TRUE ); /* like reading a file, this time write one row at a time */ while( cinfo.next_scanline < cinfo.image_height ) { row_pointer[0] = &raw_image[ cinfo.next_scanline * cinfo.image_width * cinfo.input_components]; jpeg_write_scanlines( &cinfo, row_pointer, 1 ); } /* similar to read file, clean up after we're done compressing */ jpeg_finish_compress( &cinfo ); jpeg_destroy_compress( &cinfo ); fclose( outfile ); /* success code is 1! */ return 1; } Anybody could help me out a bit with it? I've tried meddling with it but I am not sure how to do it. I I just replace this line: jpeg_stdio_dest(&cinfo, outfile); It's not going to work. 
There is more that needs to change in that function, and I am a little lost among all the pointers and memory management.

    Read the article

  • mysql eliminate responses under certain condition with join

    - by Dustin
    Forgive me if this is an easy question. I teach classes and want to be able to select those students who have taken one class, but not another class. I have two tables: lessons_slots, which is the table for every class, such as: -------------------- -ID name slots- -1 basics 10 - -2 advanced 10 - -3 basics 10 - --------------------- The other table is class_roll, which holds enrollment info, such as: -------------------- -sID classid firstname lastname- -1 1 Jo Schmo -2 1 Person Two ... -13 2 Jo Schmo --------------------- What I want to do is select everyone who has not had the advanced class (for example). I've tried doing SELECT * FROM lessons_slots LEFT JOIN class_roll ON lessons_slots.ID = class_roll.classid WHERE lessons_slots.name != 'advanced' But that doesn't work. Any ideas?
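
    One way to phrase "never took the advanced class" (a sketch, not from the post; it uses the table and column names above) is an anti-join:

        SELECT DISTINCT cr.sID, cr.firstname, cr.lastname
        FROM class_roll cr
        WHERE NOT EXISTS (
            SELECT 1
            FROM class_roll cr2
            JOIN lessons_slots ls ON ls.ID = cr2.classid
            WHERE cr2.sID = cr.sID
              AND ls.name = 'advanced'      -- any enrollment in an advanced class disqualifies
        );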

    Read the article

  • Filtering columns in SQL Server replication - how?

    - by truthseeker
    Hi, I need to replicate some data from two tables in one database to another database. I used snapshot replication. The issue is that I would like to replicate only some selected columns; the other columns should keep their existing data untouched - I don't want to lose their data, since the source of those columns is another system. So I need to replicate only the data from my columns. Does anybody know how to achieve this?
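
    For snapshot/transactional articles, column (vertical) filtering is normally configured with sp_articlecolumn (a sketch with assumed publication and article names, not from the post); note it controls which columns are published, it does not by itself protect the subscriber's other columns:

        EXEC sp_articlecolumn
            @publication = N'MyPublication',   -- assumed name
            @article     = N'MyTable',         -- assumed name
            @column      = N'ColumnToPublish', -- repeat for each column to include
            @operation   = N'add';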

    Read the article

  • MySQL SELECT results from 1 table, but exclude results depending on another table?

    - by Brandon
    Hey, what SQL query would I have to use if I want to get the results from a table 'messages' but exclude rows that have a matching entry in 'messages_view' where messages_view.message = messages.id AND messages_view.deleted = 1 AND messages_view.user = $somephpvariable? In more layman's terms, I have a messages table with each message identified by an 'id', as well as a messages_view table connected to it through a 'message' field. I want to get the rows in messages that are not deleted (this comes from messages_view) for a specific 'user'; 'deleted' = 1 when the message is deleted. Here is my current SQL query that just gets the values out of messages: SELECT * FROM messages WHERE ((m_to=$user_id) OR (m_to=0 AND (m_to_state='' OR m_to_state='$state') AND (m_to_city='' OR m_to_city='$city'))) Here is the layout of my tables: table: messages ---------------------------- id (INT) (auto increment) m_from (INT) <-- Represents a user id (0 = site admin) m_to (INT) <-- Represents a user id (0 = all users) m_to_state (VARCHAR) m_to_city (VARCHAR) table: messages_view ---------------------------- message (INT) <-- Corresponds to messages.id above user (INT) <-- Represents a user id deleted (INT) <-- 1 = deleted
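
    One way to express the exclusion (a sketch, not from the post) is an anti-join against messages_view, which can be combined with the WHERE clause above:

        SELECT m.*
        FROM messages m
        WHERE NOT EXISTS (
            SELECT 1
            FROM messages_view mv
            WHERE mv.message = m.id
              AND mv.user    = @user_id    -- placeholder for $somephpvariable
              AND mv.deleted = 1
        );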

    Read the article

  • Best method for Binding ComboBox

    - by LnDCobra
    I am going to be developing a large project which will include a large number of ComboBoxes. Most of these combo boxes will be bound to a database field which is related to another dataset/table. For instance, I have the following 2 tables: Company {CompanyID, CompanyName, MainContact} Contacts {ContactID, ContactName} When the user clicks to edit a company, a TextBox will be there to edit the company name, but also a ComboBox will be there. The way I am currently doing it is binding the ComboBox to the Contacts dataset and manually updating the Company MainContact field in code-behind. Is there any way for me to bind the selected item to the Company MainContact field in XAML and the items to the ContactName, and eliminate the code-behind? The reason for this is that when you start making hundreds of combo boxes all over the application, it gets long-winded creating code-behind each time to do the update.

    Read the article

  • Optimising (My)SQL Query

    - by Simon
    I usually use an ORM instead of SQL and I am slightly out of touch on the different JOINs... SELECT `order_invoice`.*, `client`.*, `order_product`.*, SUM(product.cost) as net FROM `order_invoice` LEFT JOIN `client` ON order_invoice.client_id = client.client_id LEFT JOIN `order_product` ON order_invoice.invoice_id = order_product.invoice_id LEFT JOIN `product` ON order_product.product_id = product.product_id WHERE (order_invoice.date_created >= '2009-01-01') AND (order_invoice.date_created <= '2009-02-01') GROUP BY `order_invoice`.`invoice_id` The tables/columns are logically named... it's a shop-type application... the query works... it's just very, very slow... I use the Zend Framework and would usually use Zend_Db_Table_Row::find(Parent|Dependent)Row(set)('TableClass'), but I have to make lots of joins and I thought it would improve performance by doing it all in one query instead of hundreds... Can I improve the above query by using more appropriate JOINs or a different implementation? Many thanks.
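
    Before changing the join types, it is worth checking that the join and filter columns are indexed (a sketch of plausible indexes, not from the post; the primary keys may already cover some of them):

        CREATE INDEX idx_invoice_created ON order_invoice (date_created);
        CREATE INDEX idx_invoice_client  ON order_invoice (client_id);
        CREATE INDEX idx_op_invoice      ON order_product (invoice_id);
        CREATE INDEX idx_op_product      ON order_product (product_id);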

    Read the article

  • Contents after &amp; cannot be retrieved by xml parsing in iphone?

    - by Warrior
    I am new to iPhone development. I am parsing a YouTube RSS feed to display its content in a table view. While retrieving the content of the link tag, I am able to retrieve only half of the URL; the string after the & cannot be retrieved. I want to retrieve the full URL so that I can use it to load a web view. How can I retrieve the full URL? Please help me out. The parsing is done the same way as in Lazy Tables. The original URL is "http:www.youtube.xxxxxxxx&amp;xxxxxgdata"; the retrieved URL is "http:www.youtube.xxxxxxxx". Thanks.

    Read the article

  • Login fails after upgrade to ASP.net 4.0 from 3.5

    - by lomac
    I cannot log in using any of the membership accounts with the .NET 4.0 version of the app. It fails as if it were the wrong password, and FailedPasswordAttemptCount is incremented in the my_aspnet_membership table. (I am using membership with the MySQL membership provider.) I can create new users, and they appear in the database, but I cannot log in using the new user credentials (yes, IsApproved is 1). One clue is that the hashed passwords in the database are longer for the users created with the ASP.NET 4.0 version, e.g. 3lwRden4e4Cm+cWVY/spa8oC3XGiKyQ2UWs5fxQ5l7g=, while the old .NET 3.5 ones are all like +JQf1EcttK+3fZiFpbBANKVa92c=. I can still log in when connecting to the same DB with the .NET 3.5 version, but only to the old accounts, not the new ones created with the .NET 4.0 version. The 4.0 version cannot log in to any accounts. I tried dropping the whole database on my test system (the membership tables are then auto-created on first run), but it's still the same: I can create users, but can't log in.

    Read the article

  • How do I make a sql query where fields are the result of a different query?

    - by CRP
    I have two tables; the first is like this: f1 | f2 | f3 | f4 ----------------- data.... The second contains info about the fields of the first: field | info ------------ f1 a f2 b f3 a etc. I would like to query the first table, selecting the fields by means of a query on the second. So, for example, I might want to get data for the fields where info is equal to "a", which means I would do "select f1, f3 from first_table". How do I do this programmatically? I was thinking about something along the lines of select (select fields from second_table where info='a') from first_table. Thanks, Chris
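
    A subquery cannot supply a column list directly, so this is usually done as dynamic SQL in two steps (a sketch in MySQL syntax, not from the post; table names follow the example):

        SELECT GROUP_CONCAT(field) INTO @cols
        FROM second_table
        WHERE info = 'a';                  -- e.g. yields 'f1,f3'

        SET @sql = CONCAT('SELECT ', @cols, ' FROM first_table');
        PREPARE stmt FROM @sql;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;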

    Read the article

  • SQL Server Delete - Foreign Key

    - by Ahmet Altun
    I have got three tables in SQL Server 2005: USER table: information about the user and so on. COUNTRY table: holds the list of all countries in the world. USER_COUNTRY table: matches which user has visited which country; it holds UserID and CountryID. For example, the USER_COUNTRY table looks like this: ID -- UserID -- CountryID 1 -- 1 -- 34 2 -- 1 -- 5 3 -- 2 -- 17 4 -- 2 -- 12 5 -- 2 -- 21 6 -- 3 -- 19 My question is: when a user is deleted in the USER table, how can I make the associated records in the USER_COUNTRY table be deleted automatically? Maybe by using a foreign key constraint?
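
    A cascading foreign key on the junction table is the usual approach (a sketch, not from the post; USER is a reserved word in T-SQL, hence the brackets, and the referenced column name is assumed):

        ALTER TABLE USER_COUNTRY
        ADD CONSTRAINT FK_UserCountry_User
            FOREIGN KEY (UserID)
            REFERENCES [USER] (ID)          -- assumes USER's primary key column is ID
            ON DELETE CASCADE;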

    Read the article

  • How does one SELECT block another?

    - by Krip
    I'm looking at the output of sp_WhoIsActive on SQL Server 2005, and it's telling me one session is blocking another - fine. However, both are running a SELECT. How does one SELECT block another? Shouldn't they both be acquiring shared locks (which are compatible with one another)? Some more details: neither session has an open transaction count - so they are stand-alone. The queries join a view with a table. They are complex queries which join lots of tables and result in 10,000 or so reads. Any insight much appreciated.
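
    To see exactly which lock modes the two sessions hold or wait on (a sketch, not from the post; substitute the session IDs reported as blocker and blocked):

        SELECT request_session_id, resource_type, resource_subtype,
               request_mode, request_status
        FROM sys.dm_tran_locks
        WHERE request_session_id IN (51, 52);   -- hypothetical blocker / blocked session ids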

    Read the article

  • How to Import Excel Data into Silverlight App for Visualization?

    - by Ulf
    Hi there, I'm building a Silverlight application (Silverlight 4, Visual Studio 2010) in which the user can generate charts (line charts, bar charts) dynamically by entering a specific time period. At the moment I have no idea how to import the data into Silverlight to generate the charts. My data is stored in 4 Excel tables and I have no clue what the best way would be to get that data into Silverlight. I read a lot of examples using SQL Server as the database, but unfortunately SQL Server is not an option for me. Any help would be great!

    Read the article

  • Average Rating script

    - by MILESMIBALERR
    I have asked this once before but I didn't get a very clear answer. I need to know how to make a rating script for a site. I have a form that submits a rating out of ten to MySQL. How would you get the average rating to be displayed from the MySQL column using PHP? One person suggested having two tables: one for all the ratings, and one for the average rating of each page. Is there a simpler method than this?
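
    The simpler method is usually a single aggregate over the ratings table (a sketch with assumed table and column names, not from the post):

        SELECT page_id,
               AVG(rating) AS average_rating,   -- rating is the value out of ten from the form
               COUNT(*)    AS votes
        FROM ratings
        GROUP BY page_id;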

    Read the article

  • PHP/MySQL: Storing and retrieving UUIDS

    - by Greg
    I'm trying to add UUIDs to a couple of tables, but I'm not sure what the best way to store/retrieve these would be. I understand it's far more efficient to use BINARY(16) instead of VARCHAR(36). After doing a bit of research, I also found that you can convert a UUID string to binary with: UNHEX(REPLACE(UUID(),'-','')) Pardon my ignorance, but is there an easy way to do this with PHP and then turn it back into a string, when needed, for readability? Also, would it make much difference if I used this as a primary key instead of auto_increment? EDIT: Found part of the answer: $bin = pack("h*", str_replace('-', '', $guid)); How would you unpack it?
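
    On the MySQL side the round trip can stay entirely in SQL (a sketch, not from the post; 'things' and 'id' are assumed names for a table with a BINARY(16) column):

        -- store: strip the dashes and unhex the 32 hex characters into 16 bytes
        INSERT INTO things (id) VALUES (UNHEX(REPLACE(UUID(), '-', '')));

        -- retrieve: hex it back and re-insert the dashes for readability
        SELECT LOWER(CONCAT_WS('-',
                   SUBSTR(HEX(id),  1, 8),
                   SUBSTR(HEX(id),  9, 4),
                   SUBSTR(HEX(id), 13, 4),
                   SUBSTR(HEX(id), 17, 4),
                   SUBSTR(HEX(id), 21))) AS uuid_text
        FROM things;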

    Read the article

  • How to properly reserve identity values for usage in a database?

    - by esac
    We have some code in which we need to maintain our own identity (PK) column in SQL. We have a table into which we bulk insert data, but we add data to related tables before the bulk insert is done, thus we cannot use an IDENTITY column and find out the value up front. The current code selects the MAX value of the field and increments it by 1. Although there is a highly unlikely chance that two instances of our application will be running at the same time, this is still not thread-safe (not to mention that it goes to the database every time). I am using the ADO.NET entity model. How would I go about 'reserving' a range of IDs to use, and when that range runs out, grab a new block to use, and guarantee that the same range will not be used?
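
    A common pattern is a small counter table that hands out a block of IDs in one atomic UPDATE (a sketch, not from the post; KeyTable, KeyName and NextId are assumed names):

        DECLARE @block INT, @last INT;
        SET @block = 1000;                       -- ids reserved per round trip

        UPDATE KeyTable                          -- one row per counter
        SET @last = NextId = NextId + @block     -- atomic read-and-increment
        WHERE KeyName = 'BulkTarget';

        -- this caller now owns ids (@last - @block + 1) through @last
        SELECT @last - @block + 1 AS FirstId, @last AS LastId;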

    Read the article

  • MySQL foreign key constraints, cascade delete

    - by Cudos
    Hello. I want to use foreign keys to keep integrity and avoid orphans (I already use InnoDB). How do I write the SQL statements so that deletes cascade? Secondly, a concern about ON DELETE CASCADE: if I delete a category, would it delete products related to that category even though there are other categories related to those products? The pivot table "categories_products" creates a many-to-many relationship between the two other tables. categories - id (INT) - name (VARCHAR 255) products - id - name - price categories_products - categories_id - products_id
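
    A sketch of the constraints (not from the post): with the cascades on the pivot table, deleting a category removes only the linking rows, never the products themselves:

        ALTER TABLE categories_products
          ADD CONSTRAINT fk_cp_category
              FOREIGN KEY (categories_id) REFERENCES categories (id) ON DELETE CASCADE,
          ADD CONSTRAINT fk_cp_product
              FOREIGN KEY (products_id)   REFERENCES products (id)   ON DELETE CASCADE;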

    Read the article

  • Cannot add an entity that already exists. (LINQ to SQL)

    - by Vicheanak
    Hello guys, in my database there are 3 tables: CustomerType CusID EventType EventTypeID CustomerEventType CusID EventTypeID Dim db = new CustomerEventDataContext Dim newEvent = new EventType newEvent.EventTypeID = txtEventID.text db.EventType.InsertOnSubmit(newEvent) db.SubmitChanges() 'To select the last ID of event' Dim lastEventID = (from e in db.EventType Select e.EventTypeID Order By EventTypeID Descending).first() Dim chkbx As CheckBoxList = CType(form1.FindControl("CheckBoxList1"), CheckBoxList) Dim newCustomerEventType = New CustomerEventType Dim i As Integer For i = 0 To chkbx.Items.Count - 1 Step i + 1 If (chkbx.Items(i).Selected) Then newCustomerEventType.INTEVENTTYPEID = lastEventID newCustomerEventType.INTSTUDENTTYPEID = chkbxStudentType.Items(i).Value db.CustomerEventType.InsertOnSubmit(newCustomerEventType) db.SubmitChanges() End If Next It works fine when I check only a single ID of CustomerEventType from CheckBoxList1: it inserts data into EventType with ID 1 and CustomerEventType with ID 1. However, when I check both of them, the error message says "Cannot add an entity that already exists." Any suggestions, please? Thanks in advance.

    Read the article

  • Side effects of reordering columns in PostgreSQL

    - by Summer
    I sometimes re-order the columns in my Postgres DB. Since Postgres can only add columns at the end of tables, I end up re-ordering by adding new columns at the end of the table, setting them equal to existing columns, and then dropping the original columns. My question is: what does PostgreSQL do with the memory that's freed by dropped columns? Does it automatically re-use the memory, so a single record consumes the same amount of space as it did beforehand? But that would require a re-write of the whole table, so to avoid that, does it just keep a bunch of blank space around in each record? Thanks! ~S
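
    For concreteness, the reordering dance described above looks like this for a single column (a sketch, not from the post; 't' and 'b' are assumed names):

        ALTER TABLE t ADD COLUMN b_new integer;
        UPDATE t SET b_new = b;                  -- copy the data into the trailing column
        ALTER TABLE t DROP COLUMN b;             -- the column is only marked dropped in the catalog
        ALTER TABLE t RENAME COLUMN b_new TO b;
        -- old row versions keep their bytes until the table is rewritten
        -- (e.g. by CLUSTER or a dump/reload)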

    Read the article
