Search Results

Search found 31293 results on 1252 pages for 'database agnostic'.


  • PHP: Which DB/DB Engine supports search well?

    - by KeyStroke
    Hi, I'm starting a site that relies heavily on search. While it will probably only search basic metadata in the beginning, it might grow into something bigger in the future. So which DB/DB engine is best, in your opinion, when it comes to search performance and future scalability? I appreciate your help.
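    A minimal sketch of one common starting point, assuming MySQL and a hypothetical listings table (FULLTEXT indexes require MyISAM on older MySQL; InnoDB supports them from 5.6):

        -- Hypothetical table with a full-text index over the searchable columns
        CREATE TABLE listings (
            id INT AUTO_INCREMENT PRIMARY KEY,
            title VARCHAR(255),
            description TEXT,
            FULLTEXT KEY ft_listings (title, description)
        ) ENGINE=MyISAM;

        -- Natural-language search across both indexed columns
        SELECT id, title
        FROM listings
        WHERE MATCH(title, description) AGAINST('search terms');

    Dedicated search engines such as Sphinx or Lucene/Solr are the usual next step once the database's built-in search stops scaling.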

    Read the article

  • extract transform load

    - by mitch
    Wikipedia defines a 'typical' ETL cycle as:
    1. Cycle initiation
    2. Build reference data
    3. Extract (from sources)
    4. Validate
    5. Transform (clean, apply business rules, check for data integrity, create aggregates or disaggregates)
    6. Stage (load into staging tables, if used)
    7. Audit reports (for example, on compliance with business rules; also, in case of failure, helps to diagnose/repair)
    8. Publish (to target tables)
    9. Archive
    10. Clean up
    What is meant by 'Build reference data'?
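    One common reading, offered here as an assumption rather than a definitive answer: reference data is the small set of permissible codes and lookup values that the later Validate and Transform steps check incoming records against. A minimal illustration in SQL, with hypothetical table names:

        -- Hypothetical reference table built before the main load
        CREATE TABLE ref_country (
            country_code CHAR(2) PRIMARY KEY,
            country_name VARCHAR(100) NOT NULL
        );
        INSERT INTO ref_country VALUES ('US', 'United States'), ('DE', 'Germany');

        -- The Validate step can then flag staged rows with unknown codes
        SELECT s.*
        FROM staging_orders s
        LEFT JOIN ref_country r ON r.country_code = s.country_code
        WHERE r.country_code IS NULL;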

    Read the article

  • what does "out of range" mean?

    - by user329820
    Hi, I have checked these statements with MySQL: no error occurs and the output is 0 rows. But my friend checked them and got an error for the SELECT because it is "out of range"! Is he correct? Thanks.
    CREATE TABLE T1(A INTEGER NULL);
    SELECT * FROM T1;
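    For reference, MySQL's "out of range" error normally refers to a value that does not fit a column's type, not to a SELECT on an empty table. A minimal sketch, assuming strict SQL mode:

        -- TINYINT holds -128..127, so with strict mode this fails with
        -- ERROR 1264: Out of range value for column 'A'
        CREATE TABLE T2 (A TINYINT);
        INSERT INTO T2 VALUES (999);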

    Read the article

  • DB optimization to use it as a queue

    - by anony
    We have a table called worktable with columns (key (primary key), ptime, aname, status, content). A producer inserts rows into this table, and a consumer orders by the key column and fetches the first row whose status is 'pending'. The consumer then processes that row: 1. updates status to "processing", 2. does some processing using content, 3. deletes the row. We are facing contention issues when we try to run multiple consumers (probably due to the order-by, which does a full table scan). Using Advanced Queues would be our next step, but before we go there we want to check the maximum throughput we can achieve with multiple consumers and a producer on the table. What optimizations can we do to get the best numbers possible? Could a consumer fetch 1000 rows at a time, process them in memory and then delete them? Would that improve things? What are the other possibilities? Partitioning the table? Parallelization? Index-organized tables?
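    One pattern worth benchmarking before moving to Advanced Queues, sketched here in PostgreSQL syntax as an assumption (Oracle also supports SKIP LOCKED, though its row-limiting syntax differs): let each consumer claim an unlocked pending row instead of every consumer contending on the same first row.

        -- Each consumer locks and takes one pending row that nobody else holds
        SELECT key, content
        FROM worktable
        WHERE status = 'pending'
        ORDER BY key
        LIMIT 1
        FOR UPDATE SKIP LOCKED;

        -- An index on (status, key) also avoids the full table scan behind the order-by
        CREATE INDEX idx_worktable_pending ON worktable (status, key);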

    Read the article

  • node.js storing gamestate, how?

    - by expressnoob
    I'm writing a game in JavaScript, and to prevent cheating I'm having the game be played on the server (it's a board game, like a more complicated checkers). Since the game is fairly complex, I need to store the game state in order to validate client actions. Is it possible to store the game state in memory? Is that smart? Should I do that? If so, how? I don't know how that would work. I can also store it in Redis, and that sort of thing is pretty familiar to me and requires no explanation. But if I do store it in Redis, the problem is that on every single move the game would need to fetch the data from Redis and interpret and parse it in order to recreate the game state from scratch. Since moves happen very frequently, this seems very wasteful to me. What should I do?

    Read the article

  • Mysql query question

    - by brux
    I have 2 tables:
    Customer: customerid (int, primary key, auto-increment), fname (varchar), sname (varchar), housenum (varchar), street (varchar)
    Items: itemid (int, primary key, auto-increment), type (varchar), collectiondate (date), releasedate (date), customerid (int)
    I need a query which will get me all items whose releasedate falls within (and including) the 3 days prior to the current date, i.e. it should return customerid, fname, sname, street, housenum, type and releasedate for all such items. Thanks in advance.
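    A minimal sketch, assuming MySQL and the column names above:

        -- Items released within the last 3 days (inclusive), joined to their customer
        SELECT c.customerid, c.fname, c.sname, c.street, c.housenum,
               i.type, i.releasedate
        FROM Items i
        JOIN Customer c ON c.customerid = i.customerid
        WHERE i.releasedate BETWEEN DATE_SUB(CURDATE(), INTERVAL 3 DAY) AND CURDATE();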

    Read the article

  • How to create unique user key

    - by Grayson Mitchell
    Scenario: I have a fairly generic table (Data) that has an identity column. The data in this table is grouped (let's say by city). The users need an identifier for printing on paper forms, etc. The users can only access their own city's data, so if they use the identity column for this purpose they will see odd numbers (e.g. a 'New York' user might see 1, 37, 2028... as the listed keys). Ideally they would see 1, 2, 3... (or something similar). The problem, of course, is concurrency; this being a web application, you can't just have something like:
    UserId = Select Count(*)+1 from Data Where City='New York'
    Has anyone come up with any cunning ways around this problem?
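    One option that avoids storing a per-city counter at all, a minimal sketch assuming SQL Server and hypothetical Id/City column names: derive the display number at read time with a window function.

        -- Numbering restarts per city, so each user sees 1, 2, 3...
        SELECT ROW_NUMBER() OVER (PARTITION BY City ORDER BY Id) AS DisplayId,
               Id, City
        FROM Data
        WHERE City = 'New York';

    Note the numbers shift if earlier rows are deleted, so this only fits when the identifier is purely cosmetic.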

    Read the article

  • Can I set ignore_dup_key on for a primary key?

    - by Mr. Flibble
    I have a two-column primary key on a table. I have attempted to alter it to set the ignore_dup_key to on with this command: ALTER INDEX PK_mypk on MyTable SET (IGNORE_DUP_KEY = ON); But I get this error: Cannot use index option ignore_dup_key to alter index 'PK_mypk' as it enforces a primary or unique constraint. How else should I set IGNORE_DUP_KEY to on?
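    One possible workaround, sketched for SQL Server with hypothetical column names: the option cannot be changed in place on a constraint-backed index, but it can be specified when the constraint is (re)created.

        -- Drop and recreate the primary key with IGNORE_DUP_KEY set at creation time
        ALTER TABLE MyTable DROP CONSTRAINT PK_mypk;
        ALTER TABLE MyTable
            ADD CONSTRAINT PK_mypk PRIMARY KEY (Col1, Col2)
            WITH (IGNORE_DUP_KEY = ON);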

    Read the article

  • retrieving the record which has only one value

    - by ashwani66476
    Hello all, please suggest a query that retrieves only those records whose value appears in a single row of the table. For example, table1:
    name age
    aaa 20
    bbb 10
    ccc 20
    ddd 30
    If I run "select distinct age from table1" the result will be:
    age
    20
    10
    30
    But I need a query which gives a result like:
    name age
    bbb 10
    ddd 30
    Thanks.
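    A minimal sketch, assuming the sample table above and that "single row" means the age value occurs exactly once:

        -- Keep only rows whose age appears exactly once in the table
        SELECT name, age
        FROM table1
        WHERE age IN (
            SELECT age
            FROM table1
            GROUP BY age
            HAVING COUNT(*) = 1
        );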

    Read the article

  • When/Why to use Cascading in SQL Server?

    - by Joel Coehoorn
    When setting up foreign keys in SQL Server, under what circumstances should you have it cascade on delete or update, and what is the reasoning behind it? This probably applies to other databases as well. I'm looking most of all for concrete examples of each scenario, preferably from someone who has used them successfully.
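    For reference, a minimal sketch of the feature being discussed, assuming hypothetical Orders/OrderLines tables: deleting a parent order automatically removes its lines.

        -- Child rows follow the parent on delete and on key updates
        CREATE TABLE Orders (OrderId INT PRIMARY KEY);
        CREATE TABLE OrderLines (
            OrderLineId INT PRIMARY KEY,
            OrderId INT NOT NULL,
            CONSTRAINT FK_OrderLines_Orders FOREIGN KEY (OrderId)
                REFERENCES Orders (OrderId)
                ON DELETE CASCADE
                ON UPDATE CASCADE
        );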

    Read the article

  • get n records at a time from a temporary table

    - by Claudiu
    I have a temporary table with about 1 million entries. The temporary table stores the result of a larger query. I want to process these records 1000 at a time, for example. What's the best way to set up queries such that I get the first 1000 rows, then the next 1000, etc.? They are not inherently ordered, but the temporary table just has one column with an ID, so I can order it if necessary. I was thinking of creating an extra column in the temporary table to number all the rows, something like:
    CREATE TEMP TABLE tmptmp AS SELECT ##autonumber somehow##, id FROM .... --complicated query
    Then I can do:
    SELECT * FROM tmptmp WHERE autonumber >= 0 AND autonumber < 1000
    etc. How would I actually accomplish this? Or is there a better way? I'm using Python and PostgreSQL.
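    A minimal sketch, assuming PostgreSQL 8.4+ (for window functions); big_query_result is a hypothetical stand-in for the complicated query, which is elided in the original:

        -- Number the rows once, then fetch them in fixed-size slices
        CREATE TEMP TABLE tmptmp AS
            SELECT row_number() OVER (ORDER BY id) AS autonumber, id
            FROM big_query_result;

        -- First batch; shift both bounds by 1000 for each subsequent batch
        SELECT * FROM tmptmp WHERE autonumber >= 1 AND autonumber < 1001;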

    Read the article

  • Solr autocommit and autooptimize?

    - by Camran
    I will be uploading my website to a VPS soon. It is a classifieds website which uses Solr integrated with MySQL. Solr is updated whenever a new classified is posted or deleted. I need a way to automate the commit() and optimize() calls, for example once every 3 hours or so. How can I do this? (Details please.) Also, when is it ideal to optimize? Thanks

    Read the article

  • Many tables for many users?

    - by Seagull
    I am new to web programming, so excuse the ignorance... ;-) I have a web application that in many ways can be considered a multi-tenant environment. By this I mean that each user of the application gets their own 'custom' environment, with absolutely no interaction between those users. So far I have built the web application as a 'single user' environment. In other words, I haven't actually done anything to support multiple users, but have only worked on the functionality I want from the app. Here is my problem: what's the best way to build a multi-user environment?
    1. All users point to the same 'core' backend. In other words, I build the logic to separate users via appropriate SQL queries (e.g. select * from table where user='123' and attribute='456').
    2. Each user points to a unique tablespace, which is built separately as they join the system. In this case I would simply generate ALL the relevant SQL tables per user, with some sort of suffix for the user (e.g. now a query would look like 'select * from table_ where attribute ='456').
    In short, it's the difference between "select * from table where USER=" and "select * from table_USER".
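    A minimal sketch of the first (shared-schema) option, in MySQL-flavored SQL with a hypothetical widgets table: every row carries its owner, and a composite index leading with the user keeps per-user queries cheap.

        -- One shared table for all users
        CREATE TABLE widgets (
            widget_id INT AUTO_INCREMENT PRIMARY KEY,
            user_id   INT NOT NULL,
            attribute VARCHAR(50),
            INDEX idx_widgets_user (user_id, attribute)
        );

        -- Every query is scoped to the current user
        SELECT * FROM widgets WHERE user_id = 123 AND attribute = '456';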

    Read the article

  • How to change data structure in mysql using mysqldump without deleting files

    - by Don Quixote
    Essentially what I'm trying to do is sync a production server with a sandbox server, but only the table structures and stored procedures. The procedures aren't a problem since they can be overridden, but the tables are. I want to sync and alter their structures on the production server using mysqldump (or any other way that you can propose) without altering any existing data. If it helps, I only want to add more columns, not remove any existing ones. Also, I am using mysqlyog. Is there any way to do this?

    Read the article

  • Storing user settings in table - how?

    - by Mdillion
    I have about 200 settings for the user; these include notice settings and tracing settings for user activities on objects. The problem is how to store them in the DB. Should each setting be a row or a column? If a column, the table will have 200 columns. If a row, then about 3 columns, but 200 rows per user x even 10 million users = not good. So how else can I store all these settings? NOTE: these settings are a mix of text entries and FK lookups to other tables. Thanks.
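    A minimal sketch of the row-per-setting (key/value) layout, with hypothetical table and column names; the composite primary key keeps one user's settings to a single index range scan.

        -- One row per (user, setting); 200 rows per user is a cheap indexed fetch
        CREATE TABLE user_setting (
            user_id    INT NOT NULL,
            setting_id INT NOT NULL,         -- FK to a setting-definition table
            value_text VARCHAR(255) NULL,    -- for free-text settings
            value_ref  INT NULL,             -- for FK-lookup settings
            PRIMARY KEY (user_id, setting_id)
        );

        SELECT setting_id, value_text, value_ref
        FROM user_setting
        WHERE user_id = 42;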

    Read the article

  • cakephp - form for belongsTo Model

    - by user1511579
    I created the following model to link 2 relational tables:

        class Ficha extends AppModel {
            //public $useTable = 'ficha_seg';
            var $primaryKey = 'id_ficha';
            var $name = 'Ficha';
            var $belongsTo = array(
                'Perigo' => array(
                    'className' => 'Perigo',
                    'foreignKey' => false,
                    'conditions' => 'Perigo.id_fichas = Ficha.id_ficha'
                )
            );
        }

    Now, I have a form that requires data from the class Ficha, and it is then redirected to another ctp page where I will input the data for the table "Perigos". However, since I'm still a newbie in CakePHP, I'm having difficulty building that second form to insert the data into the table "Perigos". Here is the code I have built so far for the second form. FichasController.php (the method where it is supposed to save the data into the table "Perigos"):

        public function preencher_ficha() {
            if ($this->request->is('ficha')) {
                $this->Ficha->create();
                if ($this->Ficha->Perigo->save($this->request->data)) {
                    $last_id = $this->Ficha->getLastInsertID();
                    $this->Session->setFlash('Your post has been updated '.$last_id.'.');
                    //$this->redirect(array('action' => 'preencher_ficha'));
                } else {
                    $this->Session->setFlash('Unable to qualquer coisa your post.');
                }
            }
        }

    The preencher_ficha.ctp file with the form:

        echo $this->Form->create('Ficha->Perigo', array('action' => 'index'));
        echo $this->Form->input('class_subst', array('label' => 'Classificação:'));
        echo $this->Form->input('simbolos_perigo', array('label' => 'Símbolos:'));
        echo $this->Form->input('frases_r', array('label' => 'Frases:'));
        echo $this->Form->end('Finalizar Ficha');

    Here I guess the create part is wrong, but I think I have errors in the controller part too.

    Read the article

  • Logic: Best way to sample & count bytes of a 100MB+ file

    - by Jami
    Let's say I have this 170 MB file (roughly 180 million bytes). What I need to do is create a table that lists: all 4096-byte combinations found [column 'bytes'], and the number of times each byte combination appeared [column 'occurrences']. Assume two things: I can save data very fast, but I can update my saved data very slowly. How should I sample the file and save the needed information? Here are some suggestions that are (extremely) slow:
    1. Go through each 4096-byte combination in the file and save each one, but search the table first for existing combinations and update their values. This is unbelievably slow.
    2. Go through each 4096-byte combination in the file and save up to 1 million rows of data in a temporary table. Go through that table and fix the entries (combine repeating byte combinations), then copy to the big table. Repeat with the next 1 million rows. This is faster by a bit, but still unbelievably slow.
    This is kind of like taking the statistics of the file. NOTE: I know that sampling the file can generate tons of data (around 22 GB from experience), and I know that any solution posted would take a bit of time to finish. I need the most efficient saving process.
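    One way to avoid the separate search-then-update round trip, a minimal sketch assuming MySQL and a hypothetical byte_counts table; keying on a hash of each combination is an added assumption here, purely to keep the index small enough:

        -- A unique key lets a single statement either insert or increment the count
        CREATE TABLE byte_counts (
            bytes_hash  BINARY(20) NOT NULL,  -- e.g. SHA-1 of one 4096-byte combination
            occurrences BIGINT NOT NULL DEFAULT 1,
            PRIMARY KEY (bytes_hash)
        );

        INSERT INTO byte_counts (bytes_hash, occurrences)
        VALUES (UNHEX(SHA1('example-combination')), 1)
        ON DUPLICATE KEY UPDATE occurrences = occurrences + 1;

    Batching many VALUES tuples per INSERT keeps the number of round trips down.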

    Read the article
