Search Results

Search found 36946 results on 1478 pages for 'sample database'.


  • PhpMyAdmin: Should I disable root login?

    - by Camran
    I have this user setup in phpMyAdmin:

        USER              HOST         PASSWORD  PRIVILEGES      GRANT
        debian-sys-maint  localhost    Yes       ALL PRIVILEGES  YES
        phpmyadmin        localhost    Yes       USAGE           NO
        root              127.0.0.1    Yes       ALL PRIVILEGES  YES
        root              localhost    Yes       ALL PRIVILEGES  YES
        root              my_hostname  Yes       ALL PRIVILEGES  YES
        username          localhost    Yes       ALL PRIVILEGES  YES

    where "username" is my username and "my_hostname" is my hostname. I currently log in only as the last one (username, localhost), and my PHP code uses that same account. Should I disable the other ones? And what other security measures should I take? BTW: my server is Linux and I have root access. Thanks
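
    For reference, a minimal sketch of how the unused remote root accounts could be removed in MySQL, based on the hosts listed above; this is only an illustration, not a recommendation for every setup, and debian-sys-maint should be left alone because Debian's maintenance scripts depend on it:

        -- Sketch: drop the extra root logins, keeping root@localhost for administration
        DROP USER 'root'@'127.0.0.1';
        DROP USER 'root'@'my_hostname';
        FLUSH PRIVILEGES;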

  • Is this safe? Is this OK to do in MySQL?

    - by alex
    I have always done this:

        mysqldump -hlocalhost -uuser -ppass MYDATABASE > /home/f/db_backup/MYDATABASE.sql
        mysql -uuser -ppass MYDATABASE < MYDATABASE.sql

    But if I do this instead, is it safe? Is it identical to the above?

        mysqldump -hlocalhost -uuser -ppass MYDATABASE | gzip > /home/f/db_backup/MYDATABASE.sql.gz
        zcat MYDATABASE.sql.gz | mysql -uuser -ppass MYDATABASE

  • Any way to make this PostgreSQL count query any faster?

    - by Ben Dauphinee
    I'm running a case-insensitive search on a table with 7.2 million rows, and I was wondering if there is any way to make this query faster. Currently it takes approx 11.6 seconds to execute with just one search parameter, and I'm worried that as soon as I add more than one, the query will become massively slow.

        SELECT count(*) FROM "exif_parse" WHERE (description ~* 'canon')
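
    A common way to speed up case-insensitive substring/regex matches like description ~* 'canon' in PostgreSQL is a trigram index from the pg_trgm extension; the sketch below assumes the extension can be installed and that the server is recent enough for trigram indexes to serve ILIKE (9.1+) or ~* (9.3+) queries:

        -- Sketch: trigram GIN index to accelerate case-insensitive pattern matches
        CREATE EXTENSION IF NOT EXISTS pg_trgm;

        CREATE INDEX exif_parse_description_trgm_idx
            ON exif_parse USING gin (description gin_trgm_ops);

        -- The planner can then use the index for queries such as:
        SELECT count(*) FROM exif_parse WHERE description ILIKE '%canon%';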

  • About curse of dimensionality

    - by Dan
    My question is about this topic I've been reading about a bit. Basically, my understanding is that in higher dimensions all points end up being very close to each other. The doubt I have is whether this means that calculating distances the usual way (Euclidean, for instance) is valid or not. If it were still valid, it would mean that when comparing vectors in high dimensions, the two most similar wouldn't differ much from a third one, even when this third one could be completely unrelated. Is this correct? And in that case, how would you be able to tell whether you have a match or not?
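
    The standard "concentration of distances" statement (usually credited to Beyer et al., "When Is Nearest Neighbor Meaningful?", 1999) makes this precise: under broad assumptions, roughly independent coordinates, the gap between the nearest and farthest of n data points from a query point becomes negligible relative to the nearest distance as the dimension d grows. With D_min and D_max denoting those two distances:

        % Relative contrast between farthest and nearest neighbour vanishes as d grows
        \lim_{d \to \infty} \Pr\!\left[ \frac{D^{(d)}_{\max} - D^{(d)}_{\min}}{D^{(d)}_{\min}} \le \varepsilon \right] = 1
        \qquad \text{for every } \varepsilon > 0

    Euclidean distance remains a valid metric in high dimensions; what degrades is its contrast, which is why dimensionality reduction or a similarity measure tuned to the data is usually applied before nearest-neighbour matching.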

  • What's the simplest way to retrieve all data from a table and save it back in .NET 3.5?

    - by zoman
    I have a number of tables containing some basic (business-related) mapping data. What's the simplest way to load the data from those tables and then save the modified values back? (All data in the tables should be replaced.) An ORM is out of the question, as I would like to avoid creating domain objects for each table. The actual editing of the data is not an issue (it is exported into Excel, where the data is edited, then the file is uploaded with the modified data). The technology is .NET 3.5 (ASP.NET MVC) and SQL Server 2005. Thanks.

  • Table naming convention?

    - by MattSlay
    In our manufacturing shop, each Employee hits the time clock every time they change Jobs or Machines (work centers) during their work day. Each record created in the Time Clock app has foreign keys that link the record to: the Employee, the Job, and the Machine which they are about to operate. I’m trying to determine the best name for this table… If I were tempted to call it ClockRecords or TimeClockRecords, why wouldn’t I also consider naming it JobTimeRecords, or why not MachineTimeRecords. Any ideas on a good name?
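
    For reference, a sketch of the structure being named; every table and column name here is illustrative only (the placeholder name TimeClockEntries is just one of the candidates being debated):

        -- Sketch: one row per clock-in, with foreign keys to the three related tables
        CREATE TABLE TimeClockEntries (
            Id         INTEGER PRIMARY KEY,
            EmployeeId INTEGER NOT NULL REFERENCES Employees (Id),
            JobId      INTEGER NOT NULL REFERENCES Jobs (Id),
            MachineId  INTEGER NOT NULL REFERENCES Machines (Id),
            ClockedAt  TIMESTAMP NOT NULL
        );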

  • How to handle ids and polymorphic associations in views if compound keys are not supported?

    - by duncan
    I have a movie plan table:

        movie_plans (id, description)

    Each plan has items, which describe a sequence of movies and their durations in minutes:

        movie_plan_items (id, movie_plan_id, movie_id, start_minutes, end_minutes)

    A specific instance of that plan happens in:

        movie_schedules (id, movie_plan_id, start_at)

    However, the schedule items can be calculated from the movie_plan_items and the schedule start time by adding the minutes:

        create view movie_schedule_items as
        select CONCAT(p.id, '-', s.id) as id,
               s.id as movie_schedule_id,
               p.id as movie_plan_item_id,
               p.movie_id,
               p.movie_plan_id,
               (s.start_at + INTERVAL p.start_minutes MINUTE) as start_at,
               (s.start_at + INTERVAL p.end_minutes MINUTE) as end_at
        from movie_plan_items p, movie_schedules s
        where s.movie_plan_id = p.movie_plan_id;

    I have a read-only model over this view, and it works OK, except that the id is currently a string. I now want to add a polymorphic property (like comments) to several of the previous tables, so for movie_schedule_items I need a unique and persistent numeric id. I have the following dilemma: I could avoid the id and have movie_schedule_items just use the movie_plan_id and movie_schedule_id as a compound key, as it should, but Rails sucks in this regard. Or I could create an id using String#hash or an MD5, making it slower or collision prone (and IIRC String#hash is no longer persistent across processes in Ruby 1.9). Any ideas on how to handle this situation?
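
    One low-tech option is to derive the surrogate id arithmetically instead of with CONCAT, so the view exposes a stable integer key; this is only a sketch and assumes movie_plan_items.id stays comfortably below the chosen multiplier:

        -- Sketch: numeric synthetic id, unique while movie_plan_items.id < 1000000
        create view movie_schedule_items as
        select (s.id * 1000000 + p.id) as id,
               s.id as movie_schedule_id,
               p.id as movie_plan_item_id,
               p.movie_id,
               p.movie_plan_id,
               (s.start_at + INTERVAL p.start_minutes MINUTE) as start_at,
               (s.start_at + INTERVAL p.end_minutes MINUTE) as end_at
          from movie_plan_items p
          join movie_schedules s on s.movie_plan_id = p.movie_plan_id;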

  • How to translate these 2 queries from MySQL to PostgreSQL?

    - by xRobot
    How can I translate these 2 queries to PostgreSQL?

        CREATE TABLE `example` (
          `id` int(10) unsigned NOT NULL auto_increment,
          `from` varchar(255) NOT NULL default '0',
          `message` text NOT NULL,
          `lastactivity` timestamp NULL default '0000-00-00 00:00:00',
          `read` int(10) unsigned NOT NULL,
          PRIMARY KEY (`id`),
          KEY `from` (`from`)
        ) DEFAULT CHARSET=utf8;

    Query:

        SELECT * FROM table_1 LEFT OUTER JOIN table_2 ON ( table_1.id = table_2.id ) WHERE (table_1.lastactivity > NOW()-100);
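
    A rough PostgreSQL equivalent might look like the following; it is a sketch rather than a drop-in answer: serial replaces auto_increment, "from" and "read" are quoted because they clash with keywords, the '0000-00-00' default has no PostgreSQL counterpart (NULL is used instead), KEY becomes a separate CREATE INDEX, and NOW()-100 is read as "within the last 100 seconds":

        -- Sketch of a PostgreSQL translation
        CREATE TABLE example (
            id           serial PRIMARY KEY,
            "from"       varchar(255) NOT NULL DEFAULT '0',
            message      text NOT NULL,
            lastactivity timestamp NULL,
            "read"       integer NOT NULL
        );
        CREATE INDEX example_from_idx ON example ("from");

        SELECT *
          FROM table_1
          LEFT OUTER JOIN table_2 ON table_1.id = table_2.id
         WHERE table_1.lastactivity > NOW() - INTERVAL '100 seconds';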

  • How to get the desired result from this data?

    - by Shantanu Gupta
    I want to compute a result from this table: the difference between each quantity and the quantity from the previous transaction (quantity1 - quantity2) as another column in the table shown below. The table has more such records; I have been trying to write the query but have not been able to get the result.

        select * from v order by is_active desc, transaction_id desc

        PK_GUEST_ITEM_ID  FK_GUEST_ID  QUANTITY  TRANSACTION_ID  IS_ACTIVE
        ----------------  -----------  --------  --------------  ---------
        12963             559          82000     795             1
        12988             559          79000     794             0
        12987             559          76000     793             0
        12986             559          73000     792             0
        12985             559          70000     791             0
        12984             559          67000     790             0
        12983             559          64000     789             0
        12982             559          61000     788             0
        12981             559          58000     787             0
        12980             559          55000     786             0
        12979             559          52000     785             0
        12978             559          49000     784             0
        12977             559          46000     783             0
        12976             559          43000     782             0

    I want another column that will contain the subtraction of the two quantities. The desired result should be something like this:

        PK_GUEST_ITEM_ID  FK_GUEST_ID  QUANTITY  RESULT  TRANSACTION_ID  IS_ACTIVE
        ----------------  -----------  --------  ------  --------------  ---------
        12963             559          82000     3000    795             1
        12988             559          79000     3000    794             0
        12987             559          76000     3000    793             0
        12986             559          73000     3000    792             0
        12985             559          70000     3000    791             0
        12984             559          67000     3000    790             0
        12983             559          64000     3000    789             0
        12982             559          61000     3000    788             0
        12981             559          58000     3000    787             0
        12980             559          55000     3000    786             0
        12979             559          52000     3000    785             0
        12978             559          49000     3000    784             0
        12977             559          46000     3000    783             0
        12976             559          43000     NULL    782             0
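
    Assuming an engine with window functions (SQL Server 2012+, Oracle, or PostgreSQL; the question does not say which is in use), one sketch is LAG over transaction_id, so each row is compared with the previous transaction:

        -- Sketch: quantity minus the previous transaction's quantity for the same guest
        SELECT pk_guest_item_id,
               fk_guest_id,
               quantity,
               quantity - LAG(quantity) OVER (PARTITION BY fk_guest_id
                                              ORDER BY transaction_id) AS result,
               transaction_id,
               is_active
          FROM v
         ORDER BY transaction_id DESC;

    On engines without LAG, the same column can be produced with a self-join on transaction_id - 1, at the cost of assuming the transaction ids are consecutive, as they are in the sample.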

  • How can an improvement to the query cache be tracked?

    - by Bill Paetzke
    I am parameterizing my web app's ad hoc SQL. As a result, I expect the query plan cache to shrink and to have a higher hit ratio; perhaps other important metrics will improve as well. Could I use PerfMon to track this? If so, which counters should I use? If not PerfMon, how could I report on the impact of this change?
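
    For what it's worth, the relevant PerfMon objects on SQL Server are SQLServer:Plan Cache (e.g. Cache Hit Ratio, Cache Pages) and SQLServer:SQL Statistics (e.g. Batch Requests/sec, SQL Compilations/sec); the same numbers can be read from inside the server, roughly as below, though the object_name prefix varies with the instance name, so treat the filter as a sketch:

        -- Sketch: plan-cache and compilation counters exposed through a DMV
        SELECT object_name, counter_name, instance_name, cntr_value
          FROM sys.dm_os_performance_counters
         WHERE object_name LIKE '%Plan Cache%'
            OR (object_name LIKE '%SQL Statistics%'
                AND counter_name IN ('Batch Requests/sec', 'SQL Compilations/sec'));
        -- Note: Cache Hit Ratio must be divided by its 'Cache Hit Ratio Base' row to get a percentage.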

  • Using SQL Server for web applications

    - by rem
    As far as I understand, due to licensing requirements all web applications that use MS SQL Server use either SQL Server Express (free) or SQL Server Web edition (per-processor license). Is that so? What other aspects of SQL Server usage are specific to web apps?

  • I built my rails app with sqlite and without specifying any db field sizes, Is my app now foobared for production?

    - by Tim Santeford
    I've been following a lot of good tutorials on building Rails apps, but I seem to be missing the whole specifying-and-validating-db-field-sizes part. I love not having to think about it when roughing out an app (I would never have done this with a PHP or ASP.NET app). However, now that I'm ready to go to production, I think I might have done myself a disservice by not specifying field sizes as I went. My production db will be MySQL. What is the best practice here? Do I need to go through all of my migration files and specify sizes, update all the models with validations, and update all my form partial views with input max widths? Or am I missing a critical step in my development process?

  • 'Good' programming form in maintaining / updating / accessing files by entry

    - by zhermes
    Basic Question: If I'm storing/modifying data, should I access elements of a file by a hard-coded index, i.e. targetFile.getElement(5); by a hard-coded identifier (internally translated into an index), i.e. target.getElementWithID("Desired Element"); or with some intermediate constant, DESIRED_ELEMENT = 5; ... target.getElement(DESIRED_ELEMENT), etc.?

    Background: My program (C++) stores data in lots of different 'dataFile's. I also keep a list of all of the data files in another file---a 'listFile'---which also stores some of each one's properties (see below, i.e. what its name is, how many lines of information it has, etc.). There is an object which manages the data files and the list file; call it a 'fileKeeper'. The entries of a listFile look something like:

        filename , contents name , number of lines , some more numbers ...

    It's definitely possible that I may add / remove fields from this list---but in general, they'll stay static. Right now, I have a constant string array which holds the identification of each element in each entry, something like:

        const string fileKeeper::idKeys[] = { "FileName" , "Contents" , "NumLines" ... };
        const int fileKeeper::idKeysNum = 6; // 6 - for example

    I'm trying to manage this stuff in 'good' programmatic form. Thus, when I want to retrieve the number of lines in a file (for example), instead of having a method which just retrieves the 3rd element, I do something like:

        string desiredID = "NumLines";
        int desiredIndex = indexForID(desiredID);
        string desiredElement = elementForIndex(desiredIndex);

    where the function indexForID() goes through the entries of idKeys until it finds desiredID, then returns the index it corresponds to, and elementForIndex(index) actually goes into the listFile to retrieve the index'th element of the comma-delimited string.

    Problem: This still seems pretty ugly / poor form. Is there a way I should be doing this? If not, what are some general ways in which this is usually done? Thanks!

  • Multiple column foreign key constraints

    - by eugene4968
    I want to set up table constraints for the following scenario, and I'm not sure how to do it, or if it's even possible, in SQL Server 2005. I have three tables A, B, C. C is a child of B. B has an optional foreign key (may be null) referencing A. For performance reasons I also want table C to have the same foreign key reference to table A. The constraint on table C should be that C must reference its parent (B) and also have the same foreign key reference to A as its parent. Anyone have any thoughts on how to do this?
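
    A sketch of the usual trick (all table and column names here are made up): give B a unique constraint covering both its own key and its A reference, and point C's composite foreign key at that pair, so C can only carry the same A value as its parent. Because SQL Server skips checking a composite foreign key when any of its columns is null, a second single-column foreign key on b_id keeps the parent reference enforced in that case:

        -- Sketch: C is forced to share its parent's reference to A
        CREATE TABLE A (id int PRIMARY KEY);

        CREATE TABLE B (
            id   int PRIMARY KEY,
            a_id int NULL REFERENCES A (id),
            CONSTRAINT UQ_B_id_aid UNIQUE (id, a_id)      -- superkey the child can reference
        );

        CREATE TABLE C (
            id   int PRIMARY KEY,
            b_id int NOT NULL,
            a_id int NULL,
            CONSTRAINT FK_C_B   FOREIGN KEY (b_id)       REFERENCES B (id),
            CONSTRAINT FK_C_B_A FOREIGN KEY (b_id, a_id) REFERENCES B (id, a_id)
        );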

  • MySQL query to get unique values from one column

    - by vesselyp
    I have a table named locations from which I want to select values in such a way that only distinct values are taken from one column, but all the other values are kept.

        Table name: locations
        Column 1: country          values: America, India, India, India
        Column 2: state/province   values: New York, Punjab, Karnataka, Kerala

    When I select, I should get India only once, with all three of its states listed under it. Is there any way to do this? Somebody please help.
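
    One common MySQL approach is GROUP_CONCAT, which collapses the states into one row per country; a sketch, assuming the columns are simply named country and state:

        -- Sketch: one row per country, states collected into a comma-separated list
        SELECT country,
               GROUP_CONCAT(state ORDER BY state SEPARATOR ', ') AS states
          FROM locations
         GROUP BY country;

    If separate rows per state are needed, the usual alternative is to ORDER BY country, state and suppress the repeated country values when rendering.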

  • Question about the BENCHMARK function in MySQL (incredible results)

    - by xRobot
    I have 2 tables: author, with 3 million rows, and book, with 20 thousand rows. So I have benchmarked this query with a join:

        SELECT BENCHMARK(100000000, 'SELECT book.title, author.name FROM `book`, `author` WHERE book.id = author.book_id')

    And this is the result: Query took 0.7438 sec. ONLY 0.7438 seconds for 100 million executions of a query with a join? Am I making some mistake, or is this the right result?
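
    A likely explanation (worth checking against the MySQL manual) is that BENCHMARK(count, expr) repeatedly evaluates a scalar expression, not a statement: the quoted SELECT above is just a string literal, so the loop measures evaluating a constant 100 million times rather than running the join. A sketch of the usual usage versus actually timing the join:

        -- BENCHMARK is meant for scalar expressions, e.g. a function call:
        SELECT BENCHMARK(1000000, MD5('some string'));

        -- To time the join itself, run the real query (and/or use profiling / the slow query log):
        SELECT SQL_NO_CACHE book.title, author.name
          FROM book
          JOIN author ON book.id = author.book_id;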

  • Beginner having difficulty with SQL query

    - by Vulcanizer
    Hi, I've been studying SQL for 2 weeks now and I'm preparing for an SQL test. Anyway, I'm trying to do this question. For the table:

        create table data (
          id int,
          n1 int not null,
          n2 int not null,
          n3 int not null,
          n4 int not null,
          primary key (id)
        )

    I need to return the relation with the tuples (n1, n2, n3) for which all the corresponding values of n4 are 0. The problem asks me to solve it WITHOUT using subqueries (nested selects/views). It also gives me an example table and the expected output for my query:

        insert into data (id, n1, n2, n3, n4)
        values (1, 2,4,7,0),
               (2, 2,4,7,0),
               (3, 3,6,9,8),
               (4, 1,1,2,1),
               (5, 1,1,2,0),
               (6, 1,1,2,0),
               (7, 5,3,8,0),
               (8, 5,3,8,0),
               (9, 5,3,8,0);

    It expects (2,4,7) and (5,3,8), and not (1,1,2), since that has a 1 in n4 in one of its rows. The best I could come up with was:

        SELECT DISTINCT n1, n2, n3
        FROM data a, data b
        WHERE a.ID <> b.ID
          AND a.n1 = b.n1
          AND a.n2 = b.n2
          AND a.n3 = b.n3
          AND a.n4 = b.n4
          AND a.n4 = 0

    but I found out that this also prints (1,1,2), since in the example (1,1,2,0) appears twice, for IDs 5 and 6. Any suggestions would be really appreciated.
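
    For what it's worth, a standard subquery-free approach is to group on (n1, n2, n3) and keep only the groups whose n4 values are all zero; a sketch (the MIN check only matters if n4 can be negative):

        -- Sketch: keep (n1, n2, n3) groups in which every n4 is 0
        SELECT n1, n2, n3
          FROM data
         GROUP BY n1, n2, n3
        HAVING MAX(n4) = 0 AND MIN(n4) = 0;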

  • MSSQL. Compare columns in two tables.

    - by maxt3r
    Hi, I've recently done a migration from a really old version of an application to the current version, and I faced some problems while migrating the databases. I need a query that could help me compare the columns in two tables. I don't mean the data in the rows; I need to compare the columns themselves, to figure out what changes in the table structure I've missed.
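
    One way to sketch this in SQL Server is to diff INFORMATION_SCHEMA.COLUMNS for the two tables; the table names below are placeholders, and a database prefix (e.g. OldDb.INFORMATION_SCHEMA.COLUMNS) can be added if the tables live in different databases:

        -- Sketch: columns that exist in only one table, or whose data types differ
        SELECT COALESCE(a.COLUMN_NAME, b.COLUMN_NAME) AS column_name,
               a.DATA_TYPE AS type_in_old,
               b.DATA_TYPE AS type_in_new
          FROM (SELECT COLUMN_NAME, DATA_TYPE
                  FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'OldTable') a
          FULL OUTER JOIN
               (SELECT COLUMN_NAME, DATA_TYPE
                  FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'NewTable') b
            ON a.COLUMN_NAME = b.COLUMN_NAME
         WHERE a.COLUMN_NAME IS NULL
            OR b.COLUMN_NAME IS NULL
            OR a.DATA_TYPE <> b.DATA_TYPE;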

  • Embedded analog of CouchDB (like SQLite is for SQL Server)

    - by Mike Chaliy
    I like the idea of document-oriented databases like CouchDB, and I am looking for a simple analog. My requirements are just: persistent storage for schemaless data; some simple in-process querying; transactions and versioning are good to have; a Ruby API; map/reduce is also good to have; it should work on shared hosting. What I do not need is a REST/HTTP interface (I will use it in-process). I also do not need all the scalability stuff.

  • Static selection and Ruby on Rails objects

    - by Dave
    Hi all, I have a simple problem but am having trouble wrapping my head around it. I have a video object that should have one or more "genres". This list of genres should be prepopulated, and the user should just select one or more using autocomplete or some such. Here is the question: is it worth creating a table with genres for the static selection, or should it just be included in the presentation layer? If there is a static table, how do we name it correctly? I envision something like this:

        class Video < ActiveRecord::Base
          has_many :genres
          ...
        end

        class Genre < ...
          belongs_to :video
          ...
        end

    But then we get a table called genre that basically maps all the selected genres to their parent videos, and there would still need to be some static table to reference the static genres. Is this the best way to do it? Sorry if this was rambly, a little stream of consciousness. Thanks!
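
    For reference, a sketch of the tables this usually ends up as: a static genres lookup table plus a join table for the many-to-many relation between videos and genres (Rails' has_and_belongs_to_many convention names the join table with the two table names in alphabetical order, genres_videos). The names below follow that convention but are only illustrative, and the videos table is assumed to exist already:

        -- Sketch: static lookup table plus a many-to-many join table
        CREATE TABLE genres (
            id   INTEGER PRIMARY KEY,
            name VARCHAR(255) NOT NULL
        );

        CREATE TABLE genres_videos (
            genre_id INTEGER NOT NULL REFERENCES genres (id),
            video_id INTEGER NOT NULL REFERENCES videos (id),
            PRIMARY KEY (genre_id, video_id)
        );

    An alternative is has_many :through with an explicit model for the join table, which leaves room for extra attributes on the association later.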
