Search Results

Search found 45150 results on 1806 pages for 'oracle technology database'.

Page 273 of 1806

  • Rails: Auto-Detecting Database Adapter

    - by Dex
    The new version of the ar-extensions gem requires that you load the appropriate adapter yourself. On my development machine I use MySQL, but Heroku uses PostgreSQL. For example, in development I need to do this:

      require 'ar-extensions/adapters/mysql'
      require 'ar-extensions/import/mysql'

    How can I auto-detect which adapter to use?

    Read the article

  • Database structure for versioning and multiple languages

    - by phobia
    Hi, I'm wondering how best to handle content that exists in multiple versions and multiple languages. An image of my current structure can be seen here: http://i46.tinypic.com/72fx3k.png Each piece of content can have only one active version per language, and that is the part I'm not sure how to solve. Right now I have an active column on the contentversions table, which means that for each change of active version I have to run one UPDATE to set active = false on all of that content's versions and another UPDATE to set active = true for the version in question. Any thoughts? :)
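
    A minimal sketch of the two-step switch described above, wrapped in one transaction so a piece of content never has two active versions in the same language (table and column names are taken from the question; the IDs are placeholders):

      BEGIN;
      -- deactivate every version of this content in this language
      UPDATE contentversions
         SET active = 0
       WHERE content_id = 42 AND language_id = 1;
      -- activate the chosen version
      UPDATE contentversions
         SET active = 1
       WHERE id = 1001;
      COMMIT;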

    Read the article

  • Database design: Using hundreds of fields for small values

    - by user964260
    I'm planning to develop a PHP web app that will mainly be used by registered users (sessions). While thinking about the DB design, I realised that to give the best possible user experience there would be lots of options for the user to activate, deactivate, specify, etc. For example: options for each layout element, dialog box, dashboard, grid, and so on -- colour, size, stay visible or invisible, don't ask again, show every time, advanced mode, simple mode, etc. That adds up to hundreds of fields per user, ranging from simple yes/no flags to values from 1 to N. So, is having a column for each of these options the way to go? Or how do CRMs, CMSs and other web apps store lots of 1-2 character values? Do they pack them into a text field separated by a special character and then "explode" them into an array at runtime? Thank you.
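
    One commonly used alternative to hundreds of columns is a key/value (EAV-style) settings table with one row per user per option; a hedged sketch only, with all names assumed:

      CREATE TABLE user_settings (
        user_id       INT         NOT NULL,
        setting_name  VARCHAR(64) NOT NULL,   -- e.g. 'dashboard_visible'
        setting_value VARCHAR(16),            -- short values such as 'yes', 'no', '3'
        PRIMARY KEY (user_id, setting_name)
      );

      -- read all of one user's options in a single query
      SELECT setting_name, setting_value
        FROM user_settings
       WHERE user_id = 1;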

    Read the article

  • How to implement Auto_Increment per User, on the same table?

    - by Jonas
    I would like to have multiple users that share the same tables in the database, but with one auto_increment value per user. I will use an embedded database, JavaDB, and as far as I know it doesn't support this functionality. How can I implement it? Should I implement a trigger on inserts that looks up the user's last inserted row and then adds one, or is there a better alternative? Or is it better to implement this in the application code? Or is this just a bad idea? I think this is easier to maintain than creating new tables for every user. Example:

      table
      +----+-------------+---------+------+
      | ID | ID_PER_USER | USER_ID | DATA |
      +----+-------------+---------+------+
      |  1 |           1 |       2 | 3454 |
      |  2 |           2 |       2 | 6567 |
      |  3 |           1 |       3 | 6788 |
      |  4 |           3 |       2 | 1133 |
      |  5 |           4 |       2 | 4534 |
      |  6 |           2 |       3 | 4366 |
      |  7 |           3 |       3 | 7887 |
      +----+-------------+---------+------+

      SELECT * FROM table WHERE USER_ID = 3
      +----+-------------+---------+------+
      | ID | ID_PER_USER | USER_ID | DATA |
      +----+-------------+---------+------+
      |  3 |           1 |       3 | 6788 |
      |  6 |           2 |       3 | 4366 |
      |  7 |           3 |       3 | 7887 |
      +----+-------------+---------+------+

      SELECT * FROM table WHERE USER_ID = 2
      +----+-------------+---------+------+
      | ID | ID_PER_USER | USER_ID | DATA |
      +----+-------------+---------+------+
      |  1 |           1 |       2 | 3454 |
      |  2 |           2 |       2 | 6567 |
      |  4 |           3 |       2 | 1133 |
      |  5 |           4 |       2 | 4534 |
      +----+-------------+---------+------+
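
    One way to get the behaviour in the example without a trigger is to compute the per-user counter at insert time, inside one transaction. This is a sketch only: "example_table" stands in for the table above ("table" is a reserved word), and concurrent inserts for the same user would still need locking or a retry against a unique (USER_ID, ID_PER_USER) constraint:

      -- next ID_PER_USER for user 3, computed and inserted in one statement
      INSERT INTO example_table (ID_PER_USER, USER_ID, DATA)
      SELECT COALESCE(MAX(ID_PER_USER), 0) + 1, 3, 4366
        FROM example_table
       WHERE USER_ID = 3;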

    Read the article

  • Database storage for high sample rate data in web app

    - by Jim
    I've got multiple sensors feeding data to my web app. Each channel is sampled 5 times per second and the data gets uploaded bundled into 1-minute JSON messages (containing 300 samples each). The data will be graphed using flot at multiple zoom levels, from 1 day down to 1 minute. I'm using Amazon SimpleDB and I'm currently storing the data in the 1-minute chunks I receive it in. This works well at high zoom levels, but for a full day there will simply be too many rows to retrieve. The idea I currently have is that every hour I can crawl through the data, collect together 300 samples for the last hour and store them in an hourly domain (table, if you like). Does this sound like a reasonable solution? How have others implemented the same sort of system?
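
    SimpleDB offers little in the way of server-side aggregation, but as a relational analogue of the hourly roll-up idea (an illustration only, PostgreSQL-flavoured SQL, not the poster's actual setup; all table and column names are assumptions), the hourly crawl amounts to something like:

      INSERT INTO samples_hourly (channel_id, hour_start, avg_value, min_value, max_value, sample_count)
      SELECT channel_id,
             DATE_TRUNC('hour', sampled_at) AS hour_start,
             AVG(sample_value), MIN(sample_value), MAX(sample_value), COUNT(*)
        FROM samples_raw
       WHERE sampled_at >= NOW() - INTERVAL '1 hour'
       GROUP BY channel_id, DATE_TRUNC('hour', sampled_at);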

    Read the article

  • PHP: keeping data in memory across requests (as a Java servlet can)

    - by msaif
    I need to load data into memory as an array in PHP. But in PHP, if I write $array = array("1", "2"); in test.php, that $array is initialized on every request: if test.php is requested 100 times, by hitting the browser refresh button 100 times, the $array assignment executes 100 times. I need it to execute only on the first request; subsequent requests of test.php should not re-execute the assignment but just reuse that memory location. How can I do that in PHP? In a Java servlet this is easy: put the one-time initialization in the servlet's init() lifecycle method, and subsequent requests go through service(), which keeps using the same array in memory without calling init() again. All I want is to initialize $array once and reuse that memory location on subsequent requests in PHP. Is there any possibility of doing this in PHP?

    Read the article

  • Recreation of DB using "mysql mydb < mydb.sql" is really slow when the table has tens of millions of records

    - by Jian Lin
    It seems that a MySQL database with a table of tens of millions of records gets one big INSERT INTO statement when mysqldump some_db > some_db.sql is used to back it up (is it a single INSERT statement that covers all the records?). So when reconstructing the DB using mysql some_db < some_db.sql, the CPU is hardly busy (about 1.8% usage by the mysql process... I don't see a mysqld either?) and the hard disk doesn't seem to be too busy either... Last time, the whole restore process took 5 hours. Is there a way to make it faster? For example, when doing mysqldump, can it break the INSERT statement into shorter ones, so that mysql doesn't have to parse such a long line when restoring the DB?
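
    A hedged note on the restore side (these are standard MySQL session variables, shown as a sketch): mysqldump already batches rows into multi-row extended INSERTs by default, and a restore is usually I/O- and index-bound rather than parser-bound, so deferring checks for the session often helps more than shortening the statements:

      -- inside the mysql client, before sourcing the dump
      SET autocommit = 0;
      SET unique_checks = 0;
      SET foreign_key_checks = 0;
      SOURCE some_db.sql;
      COMMIT;
      SET unique_checks = 1;
      SET foreign_key_checks = 1;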

    Read the article

  • Oracle Unique Indexes

    - by Melvin
    I was creating a new table today in 10g when I noticed an interesting behaviour. Here is an example of what I did:

      CREATE TABLE test_table (field_1 INTEGER PRIMARY KEY);

    By default, Oracle creates a unique index (and a NOT NULL constraint) for the primary key. I double-checked this: after a quick look, I found a unique index named SYS_C0065645. Everything is working as expected so far. Then I did this:

      CREATE TABLE test_table (
        field_1 INTEGER,
        CONSTRAINT pk_test_table PRIMARY KEY (field_1)
          USING INDEX (CREATE INDEX idx_test_table_00 ON test_table (field_1))
      );

    After describing my newly created index idx_test_table_00, I see that it is non-unique. I tried to insert duplicate data into the table and was stopped by the primary key constraint, proving that the functionality has not been affected. It seems strange to me that Oracle would allow a non-unique index to be used for a primary key constraint. Why is this allowed?
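
    A quick way to see which index Oracle is enforcing the constraint with (a sketch against the standard data dictionary views):

      SELECT constraint_name, index_name
        FROM user_constraints
       WHERE table_name = 'TEST_TABLE'
         AND constraint_type = 'P';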

    Read the article

  • Correct way to give users access to additional schemas in Oracle

    - by Jacob
    I have two users, Bob and Alice, in Oracle, both created by running the following commands as sysdba from sqlplus:

      create user $blah identified by $password;
      grant resource, connect, create view to $blah;

    I want Bob to have complete access to Alice's schema (that is, all tables), but I'm not sure what grant to run, and whether to run it as sysdba or as Alice. Happy to hear about any good pointers to reference material as well -- I don't seem to be able to get a good answer to this from either the Internet or "Oracle Database 10g The Complete Reference", which is sitting on my desk.
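
    For what it's worth, 10g has no single "grant on a whole schema" privilege, so the usual approach is object-level grants, table by table, run as Alice (or by a DBA with GRANT ANY OBJECT PRIVILEGE). A sketch, with "some_table" as a placeholder:

      GRANT SELECT, INSERT, UPDATE, DELETE ON alice.some_table TO bob;
      -- Bob then refers to it with the schema prefix, e.g.
      SELECT COUNT(*) FROM alice.some_table;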

    Read the article

  • DataSet v/s Database

    - by Hemanshu Bhojak
    While designing applications, it is considered very good practice to keep all the business logic in one place. So why do we sometimes put business logic in stored procedures? Could we instead fetch all the data from the DB, store it in a DataSet and process it there? What would the performance of the app be in that scenario?

    Read the article

  • Ruby on Rails and database associations

    - by Marco
    Hi all, I'm new to the Ruby world, and something is unclear to me about defining associations between models. The question is: where is the association saved? For example, if I create a Customer model by executing:

      generate model Customer name:string age:integer

    and then I create an Order model:

      generate model Order description:text quantity:integer

    and then I set up the association as follows:

      class Customer < ActiveRecord::Base
        has_many :orders
      end

      class Order < ActiveRecord::Base
        belongs_to :customer
      end

    I think something is missing here, for example the foreign key between the two entities. How does Rails handle the associations created with the keywords has_many and belongs_to? Thanks
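
    For reference (not part of the post): the association lives in a plain foreign-key column, conventionally customer_id on the orders table; the generate commands above did not create it, so a migration has to add it. In raw SQL the missing piece and the query the association produces amount to roughly:

      -- column a Rails migration would add to the orders table
      ALTER TABLE orders ADD COLUMN customer_id INTEGER;

      -- what customer.orders boils down to for the customer with id 1
      SELECT * FROM orders WHERE customer_id = 1;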

    Read the article

  • ASP.NET 4.0 pages generated from a database

    - by Tyler
    I want to create ASP.NET 4.0 dynamic pages loaded from my MS SQL Server. Basically, it's a list of locations with information. For example: Location1 would have the page www.site.com/location/location1.aspx and Location44 would have the page www.site.com/location/location44.aspx. I don't even know where to start with this; URL rewriting maybe?

    Read the article

  • PHP - Selecting rows with the same value from the database

    - by How to PHP
    I created a table that contains each user's name and job, and I made a PHP page that shows me all the users who work as doctors: I put 'doctor' into a variable and then selected from the table where job equals $doctor. That works, but what I really need is to show, on the same page, the users sharing each job grouped into their own HTML table -- one table per job. This is my code, which shows only the users who work as doctors, in one table:

      <html>
      <h1>Doctors</h1>
      </html>
      <?php
      mysql_connect('localhost', 'root', '');
      mysql_select_db('data');
      $doctor = 'doctor';
      $query = mysql_query("SELECT * FROM `users` WHERE `job` = '$doctor'") or die(mysql_error());
      while ($arr = mysql_fetch_array($query)) {
          $name = $arr['name'];
          echo $name;
      }
      ?>

    That shows me the doctors when I put 'doctor' in a variable; I want to show every job, each in its own table. Is there a way to do this? Thanks :)
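
    A sketch of one way to avoid running a separate query per job (same users table as above): fetch everything ordered by job in a single query and start a new HTML table whenever the job changes, or let MySQL group the names for you:

      -- one pass, grouped in PHP while iterating
      SELECT job, name FROM users ORDER BY job, name;

      -- or aggregated per job directly in SQL (MySQL)
      SELECT job, GROUP_CONCAT(name ORDER BY name SEPARATOR ', ') AS names
        FROM users
       GROUP BY job;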

    Read the article

  • Oracle sequence equivalent in MS SQL Server

    - by Raymond
    In Oracle there is a mechanism to generate sequence numbers, e.g.:

      CREATE SEQUENCE supplier_seq
        MINVALUE 1
        MAXVALUE 999999999999999999999999999
        START WITH 1
        INCREMENT BY 1
        CACHE 20;

    You then reference supplier_seq.nextval (e.g. SELECT supplier_seq.nextval FROM dual) to retrieve the next sequence number. How would you create the same functionality in MS SQL Server? Edit: I'm not looking for ways to automatically generate keys for table records. I need to generate a unique value that I can use as a (logical) ID for a process, so I need the exact functionality that Oracle provides.
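
    For completeness (an editorial addition, not from the post): SQL Server 2012 and later have sequences with essentially the same semantics; on the 2005/2008 releases current at the time, the usual workarounds were an IDENTITY column on a dedicated key table or a counter table updated atomically. The modern syntax:

      -- SQL Server 2012+
      CREATE SEQUENCE dbo.supplier_seq
          AS BIGINT
          START WITH 1
          INCREMENT BY 1
          CACHE 20;

      SELECT NEXT VALUE FOR dbo.supplier_seq;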

    Read the article

  • How to use DML on Oracle temporary table without generating much undo log

    - by Sambath
    Hi, using an Oracle temporary table does not generate as much redo as a normal table does; however, undo is still generated. How can I write INSERT, UPDATE or DELETE statements against a temporary table so that Oracle generates no undo, or as little as possible? Also, I've read that using /*+ APPEND */ in an insert statement generates little undo. Am I correct? If not, could anyone explain the use of the APPEND hint to me?

      INSERT /*+ APPEND */ INTO table1 (...) VALUES (...);

    Thank you.
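
    A hedged note, not from the post: the APPEND hint is only honoured for INSERT ... SELECT statements; a single-row INSERT ... VALUES ignores it (11g Release 2 adds a separate APPEND_VALUES hint for that case). A direct-path insert into a global temporary table would look like this (table and column names are placeholders):

      INSERT /*+ APPEND */ INTO gtt_table (col1, col2)
      SELECT col1, col2
        FROM source_table;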

    Read the article

  • Host...not allowed error in ODBC connection, while connecting to an online MySQL database

    - by sqlchild
    I have downloaded MySQL ODBC Connector 5.1 and am now trying to set up the DSN, but I am getting this error: Connection Failed : [HY000] [MySQL] [ODBC 5.1 Driver] Host '117.x.x.x' is not allowed to connect to this MySQL server. My server URL is server.myweb.com -- that is the name I enter in the TCP/IP Server field, with Port = 3306. I have also entered the user ID and password, the same ones I use when I open www.myweb.com/cpanel. Is this a version problem? Should the version of MySQL on my server also be 5.1, i.e. the same as the ODBC driver? Please help.
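
    For reference (an assumption about the usual cause, not from the post): this error comes from the MySQL server, which has no account entry allowing that user to connect from the client's IP address. On a cPanel host the "Remote MySQL" page adds the client IP for you, which is equivalent to a server-side grant along these lines (user, password and database names are placeholders):

      -- run on the server; replace '117.x.x.x' with the actual client IP from the error
      GRANT ALL PRIVILEGES ON mydb.* TO 'myuser'@'117.x.x.x' IDENTIFIED BY 'mypassword';
      FLUSH PRIVILEGES;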

    Read the article

  • php database session handling problem in IE8!

    - by psyb0rg
    I've got an HTML page from which I make this call periodically:

      function logon(id) {
          $.get("data.php", { action: 'online', userID: id }, function(data) {
              $("#msg").html(data);
          });
      }

    It runs this SQL in data.php:

      $sql = "update user_sessions set expires=(expires + 2) where userID = $userID";
      mysql_query($sql, $conn) or die(mysql_error());
      echo $sql;

    I can see from the echo that the SQL syntax and values are correct, but the changes to the expires field are never applied -- only in IE8! It works fine in Firefox, Safari, Chrome, IE6 and IE7. There is nothing browser-specific about making this SQL call, but the user_sessions table is used to store PHP's sessions, and I'm only increasing the session expiry time when the call is made. What in IE8's session handling is preventing the session time from changing? Is there some caching or cookie behaviour that needs to be changed?

    Read the article

  • MySQL Database Design with Internationalization

    - by Some name
    Hello, I'm going to start work on a medium-sized application, and I'm planning its DB design. One thing I'm not sure about is this: I will have many tables that need internationalization, such as membership_options, gender_options, language_options, etc. Each of these tables will share common i18n fields, like title, alternative_title, short_description and description. In your opinion, which is the best way to do it? Have a separate i18n table with the same fields for each of the tables that need them, or do something like this:

      Membership table
      ----------------
      id | created_at
      1  | 22.03.2001
      2  | 22.03.2001

      Gender table
      ------------
      id | created_at
      1  | 14.08.2002
      2  | 14.08.2002

      General translation table
      -------------------------
      record_id | table_name | string_name | alternative_title | ... | id_language
      1         | membership | regular     | null              | ... | 1 (english)
      1         | membership | normale     | null              | ... | 2 (italian)
      1         | gender     | man         | null              | ... | 1 (english)
      1         | gender     | uomo        | null              | ... | 2 (italian)

    This would save me from repeating something like:

      membership_translation table
      ----------------------------
      membership_id | name    | alternative_title | id_lang
      1             | regular | null              | 1
      1             | normale | null              | 2

      gender_translation table
      ------------------------
      gender_id | name | alternative_title | id_lang
      1         | man  | null              | 1
      1         | uomo | null              | 2

    ...and so on. It would probably reduce the number of DB tables, but I'm not sure about performance. I'm not much of a DB designer, so please let me know.
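
    A sketch of the shared-translation-table option in DDL form (column names follow the question; types, lengths and the lookup query are assumptions):

      CREATE TABLE translations (
        table_name        VARCHAR(64)  NOT NULL,   -- e.g. 'membership', 'gender'
        record_id         INT          NOT NULL,   -- id in the referenced table
        id_language       INT          NOT NULL,
        title             VARCHAR(255),
        alternative_title VARCHAR(255),
        short_description TEXT,
        description       TEXT,
        PRIMARY KEY (table_name, record_id, id_language)
      );

      -- fetch all membership options in Italian (language 2)
      SELECT t.title
        FROM membership_options m
        JOIN translations t
          ON t.table_name = 'membership'
         AND t.record_id  = m.id
         AND t.id_language = 2;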

    Read the article
