Search Results

Search found 42428 results on 1698 pages for 'database query'.


  • Normalization in plain English

    - by Yada
    I sort of understand the concept of database normalization, but I always have a hard time explaining it in plain English, especially in a job interview. I have read the Wikipedia post, but I still find it hard to explain the concept to non-developers. "Design a database in a way that avoids duplicated data" is the first thing that comes to mind. Does anyone have a nice way to explain the concept of database normalization in plain English? And what are some good examples to show the differences between first, second and third normal form? Say you go to a job interview and the interviewer asks: explain the concept of normalization and how you would go about designing a normalized database. What key points is the interviewer looking for?
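
    By way of illustration, the usual "store each fact only once" idea can be shown with a small sketch; the tables and columns below are hypothetical and not taken from the question:

        -- Unnormalized: customer details are repeated on every order row.
        CREATE TABLE orders_flat (
            order_id      INT PRIMARY KEY,
            customer_name VARCHAR(100),
            customer_city VARCHAR(100),
            product_name  VARCHAR(100)
        );

        -- Normalized (3NF-style): each fact lives in one place, linked by keys,
        -- so a customer's city is updated in exactly one row.
        CREATE TABLE customers (
            customer_id   INT PRIMARY KEY,
            customer_name VARCHAR(100),
            customer_city VARCHAR(100)
        );

        CREATE TABLE products (
            product_id    INT PRIMARY KEY,
            product_name  VARCHAR(100)
        );

        CREATE TABLE orders (
            order_id      INT PRIMARY KEY,
            customer_id   INT REFERENCES customers (customer_id),
            product_id    INT REFERENCES products (product_id)
        );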

  • Huge mysql table with Zend Framework

    - by Uffo
    I have a MySQL table with over 4 million rows; well, the problem is that SOME queries WORK and SOME DON'T, depending on the search term. If the search term matches a big volume of data in the table, then I get the following error:

        Fatal error: Allowed memory size of 1048576000 bytes exhausted (tried to allocate 75 bytes) in /home/****/public_html/Zend/Db/Statement/Pdo.php on line 290

    I currently have the Zend Framework cache for metadata enabled, and I have indexes on all the fields of that table. The site is running on a dedicated server with 2 GB of RAM. I've also set the memory limit to: ini_set("memory_limit","1000M"); Are there any other things that I can optimize? These are the types of query that I'm currently using:

        $do = $this->select()
                   ->where('branche LIKE ?', '%'.mysql_escape_string($branche).'%')
                   ->order('premium DESC');
        }
        // For name
        if (empty($branche) && empty($plz)) {
            $do = $this->select("MATCH(`name`) AGAINST ('{$theString}') AS score")
                       ->where('MATCH(`name`) AGAINST( ? IN BOOLEAN MODE)', $theString)
                       ->order('premium DESC, score');
        }

    And a few others, but they are pretty much the same. Best Regards
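
    One common way to keep a result set of that size from exhausting PHP's memory is to page through it in SQL rather than fetching every matching row at once; a rough sketch, with a hypothetical table and column names:

        -- Requires a FULLTEXT index on `name`; fetch one page of matches at a time.
        SELECT id, name, branche, premium
        FROM   listings
        WHERE  MATCH(name) AGAINST ('search term' IN BOOLEAN MODE)
        ORDER  BY premium DESC
        LIMIT  50 OFFSET 0;   -- next page: OFFSET 50, then OFFSET 100, ...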

  • Problem with ColdFusion communicating with MySQL database

    - by Greg
    Hi, I have been working to migrate a non-profit website from a local server (running Windows XP) to a GoDaddy hosting account (running Linux). Most of the pages are written in ColdFusion. Things have gone smoothly, up until this point. There is a flash form within the site (see this page: http://www.preservenet.cornell.edu/employ/submitjob.cfm) which, when completed, takes the visitor to this page: submitjobaction.cfm. I'm not quite sure what to make of this error, since I copied exactly what had been in the old MySQL database, and the .cfm files are exactly as they had been when they worked on the old server. Am I missing something? Below is the code from the database that the error seems to be referring to. When I change "Positionlat" to some default value in the database, as the error suggests, it says that another field needs a default value, and it's a never-ending chain of errors as I try to correct it. This is probably a stupid error that I'm missing, but I've been working at it for days and can't find what I'm missing. I would really appreciate any help. Thanks! -Greg

        DROP TABLE IF EXISTS employopp;
        CREATE TABLE employopp (
          POSTID int(10) NOT NULL auto_increment,
          USERID varchar(10) collate latin1_general_ci default NULL,
          STATUS varchar(10) collate latin1_general_ci NOT NULL default 'ACTIVE',
          TYPE varchar(50) collate latin1_general_ci default 'professional',
          JOBTITLE varchar(70) collate latin1_general_ci default NULL,
          NUMBER varchar(30) collate latin1_general_ci default NULL,
          SALARY varchar(40) collate latin1_general_ci default NULL,
          ORGNAME varchar(70) collate latin1_general_ci default NULL,
          DEPTNAME varchar(70) collate latin1_general_ci default NULL,
          ORGDETAILS mediumtext character set utf8 collate utf8_unicode_ci,
          ORGWEBSITE varchar(200) collate latin1_general_ci default NULL,
          ADDRESS varchar(60) collate latin1_general_ci default 'none given',
          ADDRESS2 varchar(60) collate latin1_general_ci default NULL,
          CITY varchar(30) collate latin1_general_ci default NULL,
          STATE varchar(30) collate latin1_general_ci default NULL,
          COUNTRY varchar(3) collate latin1_general_ci default 'USA',
          POSTALCODE varchar(10) collate latin1_general_ci default NULL,
          EMAIL varchar(75) collate latin1_general_ci default NULL,
          NOMAIL varchar(5) collate latin1_general_ci default NULL,
          PHONE varchar(20) collate latin1_general_ci default NULL,
          FAX varchar(20) collate latin1_general_ci default NULL,
          WEBSITE varchar(200) collate latin1_general_ci default NULL,
          POSTDATE varchar(10) collate latin1_general_ci default NULL,
          POSTUNTIL varchar(20) collate latin1_general_ci default 'select date',
          POSTUNTILFILLED varchar(20) collate latin1_general_ci NOT NULL default 'until filled',
          texteHTML mediumtext character set utf8 collate utf8_unicode_ci,
          HOWTOAPPLY mediumtext character set utf8 collate utf8_unicode_ci,
          CONFIRSTNM varchar(30) collate latin1_general_ci default NULL,
          CONLASTNM varchar(60) collate latin1_general_ci default NULL,
          POSITIONCITY varchar(30) collate latin1_general_ci default NULL,
          POSITIONSTATE varchar(30) collate latin1_general_ci default NULL,
          POSITIONCOUNTRY varchar(3) collate latin1_general_ci default 'USA',
          POSITIONLAT varchar(50) collate latin1_general_ci NOT NULL,
          POSITIONLNG varchar(50) collate latin1_general_ci NOT NULL,
          PRIMARY KEY (POSTID)
        ) ENGINE=MyISAM AUTO_INCREMENT=2007 DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci;
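
    If the underlying error is MySQL's strict-mode complaint that a NOT NULL column such as POSITIONLAT has no default value, one possible way out (a sketch only, not a confirmed fix) is to give those columns an explicit default, or to make sure the insert always supplies them:

        -- Option 1: give the strict NOT NULL columns an explicit default.
        ALTER TABLE employopp
            MODIFY POSITIONLAT varchar(50) COLLATE latin1_general_ci NOT NULL DEFAULT '',
            MODIFY POSITIONLNG varchar(50) COLLATE latin1_general_ci NOT NULL DEFAULT '';

        -- Option 2: make sure the form's INSERT names every NOT NULL column.
        INSERT INTO employopp (STATUS, JOBTITLE, POSITIONLAT, POSITIONLNG)
        VALUES ('ACTIVE', 'Example job title', '0', '0');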

  • Getting percentage of "Count(*)" to the number of all items in "GROUP BY"

    - by celalo
    Let's say I need the ratio of "the number of items available in a certain category" to "the number of all items". Please consider a MySQL table like this:

        /*
        mysql> select * from Item;
        +----+------------+----------+
        | ID | Department | Category |
        +----+------------+----------+
        |  1 | Popular    | Rock     |
        |  2 | Classical  | Opera    |
        |  3 | Popular    | Jazz     |
        |  4 | Classical  | Dance    |
        |  5 | Classical  | General  |
        |  6 | Classical  | Vocal    |
        |  7 | Popular    | Blues    |
        |  8 | Popular    | Jazz     |
        |  9 | Popular    | Country  |
        | 10 | Popular    | New Age  |
        | 11 | Popular    | New Age  |
        | 12 | Classical  | General  |
        | 13 | Classical  | Dance    |
        | 14 | Classical  | Opera    |
        | 15 | Popular    | Blues    |
        | 16 | Popular    | Blues    |
        +----+------------+----------+
        16 rows in set (0.03 sec)

        mysql> SELECT Category, COUNT(*) AS Total
            -> FROM Item
            -> WHERE Department='Popular'
            -> GROUP BY Category;
        +----------+-------+
        | Category | Total |
        +----------+-------+
        | Blues    |     3 |
        | Country  |     1 |
        | Jazz     |     2 |
        | New Age  |     2 |
        | Rock     |     1 |
        +----------+-------+
        5 rows in set (0.02 sec)
        */

    What I need is basically a result set that resembles this one:

        /*
        +----------+-------+-----------------------------+
        | Category | Total | percentage to the all items |   (Note that the number of all available items is "9")
        +----------+-------+-----------------------------+
        | Blues    |     3 |                          33 |   (3/9)*100
        | Country  |     1 |                          11 |   (1/9)*100
        | Jazz     |     2 |                          22 |   (2/9)*100
        | New Age  |     2 |                          22 |   (2/9)*100
        | Rock     |     1 |                          11 |   (1/9)*100
        +----------+-------+-----------------------------+
        5 rows in set (0.02 sec)
        */

    How can I achieve such a result set in a single query? Thanks in advance.
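
    One way to get that percentage in a single statement is to join against a subquery that counts the department total; a sketch along these lines (untested against the data above):

        -- Each category's share of all 'Popular' items, in one query.
        SELECT i.Category,
               COUNT(*) AS Total,
               ROUND(COUNT(*) * 100 / t.cnt) AS percentage
        FROM   Item AS i
               CROSS JOIN (SELECT COUNT(*) AS cnt
                           FROM   Item
                           WHERE  Department = 'Popular') AS t
        WHERE  i.Department = 'Popular'
        GROUP  BY i.Category, t.cnt;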

  • Sql query problem

    - by LiveEn
    I have the SQL query below, which should update values from a form in the database:

        $sql = "update leads set category='$Category', type='$stype', contactName='$ContactName', email='$Email', phone='$Phone', altphone='$PhoneAlt', mobile='$Mobile', fax='$Fax', address='$Address', city='$City', country='$Country', DateEdited='$today', printed='$Printed', remarks='$Remarks' where id='$id'";
        $result = mysql_query($sql) or die(mysql_error());
        echo '<h1>Successfully Updated!!.</h1>';

    When I submit, I don't get any errors and the success message is displayed, but the database isn't updated. When I echo $sql, all the values are set properly, and when I echo $result I get the value 1. Can someone please tell me what I am doing wrong here?
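
    A quick sanity check is to confirm that the WHERE clause actually matches a row, since an UPDATE that matches nothing still "succeeds"; a diagnostic sketch, with a placeholder id value:

        -- If this returns 0, the UPDATE has nothing to change.
        SELECT COUNT(*) FROM leads WHERE id = '123';

        -- Running the UPDATE by hand shows the "Rows matched / Changed" report.
        UPDATE leads SET city = 'Test City' WHERE id = '123';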

  • Silverlight -> WCF -> Database -> problem

    - by Billy
    Hi there, I have some Silverlight code that calls a WCF service, which then uses the Entity Framework to access the database and return records. Everything runs fine, but... when I replace the Entity Framework code with classic ADO.NET code I get an error: "The remote server returned an error: NotFound". When I call the ADO.NET code directly with a unit test it returns records fine, so it's not a problem with the ADO.NET code. I used Fiddler and it seems to say that the service cannot be found, with a "500" error. I don't think it's anything to do with the service, as the only thing I change is the technology used to access the database. Anyone know what I'm missing here?

  • django: grouping in an order_by query?

    - by AP257
    Hi all, I want to allocate rankings to users, based on a points field. Easy enough, you'd think, with an order_by query. But how do I deal with the situation where two users have the same number of points and need to share the same ranking? Should I use annotate to find users with the same number of points? My current code, and a pseudocode description of what I'd like to do, are below.

        top_users = User.objects.filter(problem_user=False).order_by('-points_total')

        # Wrong - in pseudocode, this should be
        # Get the highest points_total, find all the users with that points_total,
        # if there is more than one user, set status to 'Joint first prize',
        # otherwise set status to 'First prize'
        top_users[0].status = "First prize"
        if (top_users[1]):
            top_users[1].status = "Second prize"
        if (top_users[2]):
            top_users[2].status = "Third prize"
        if (top_users[3]):
            top_users[3:].status = "Highly commended"

    The code above doesn't deal with the situation where two users have the same number of points and need to share second prize. I guess I need to create a query that looks for unique values of points_total, and does some kind of nested ranking? It also doesn't cope with the fact that sometimes there are fewer than 4 users - does anyone know how I can do (in pseudocode) 'if top_users[1] is not null...' in Python?
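
    For comparison, the tie-sharing rank can also be computed directly in SQL with a correlated subquery; a sketch against a hypothetical users table (table and column names assumed, not Django-generated):

        -- Users with equal points_total share a rank; the next distinct score
        -- ranks one position lower.
        SELECT u.id,
               u.points_total,
               (SELECT COUNT(DISTINCT o.points_total)
                FROM   users AS o
                WHERE  o.points_total > u.points_total) + 1 AS ranking
        FROM   users AS u
        WHERE  u.problem_user = FALSE
        ORDER  BY ranking, u.id;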

  • Populate JSP dropdown with database info

    - by Cano63
    I'm looking for a way to populate a JSP dropdown. I want the JSP, when it loads, to fill the dropdown with the info that I have in a database table. I'm including the code of my class that creates the array and fills it with the database info. What I don't know is how to call that class from my JSP and fill the dropdown.

        // this will create my array
        public static ArrayList<DropDownBrands> getBrandsMakes() {
            ArrayList<DropDownBrands> arrayBrandsMake = new ArrayList<DropDownBrands>();
            while (rs.next()) {
                arrayBrandsMake.add(loadOB(rs)); // I load my array with an object
            }
            return arrayBrandsMake;
        }

        // this will load my array object
        private static DropDownBrands loadOB(ResultSet rs) throws SQLException {
            DropDownBrands OB = new DropDownBrands();
            OB.setBrands("BRAN");
            return OB;
        }

  • LDAP: Extend database using referral

    - by ecapstone
    My company uses an off-site LDAP server to handle authentication. I'm currently working on a local VPN for my branch that needs to use the off-site LDAP to check users' usernames and passwords, but I don't want every employee to have access to the VPN - I need to be able to control whether users can authenticate with the off-site LDAP based on whether they're allowed to use the VPN. My current solution involves having our own local LDAP server, which has a referral to the off-site server (I got most of my information from here: http://www.zytrax.com/books/ldap/ch7/referrals.html). This means that when local users try to check their credentials with the local server, it redirects them to the off-site server, which checks the credentials. This works for authentication, but not for authorization. It would be easiest to add a vpn_users group or an is_vpn_user attribute on the off-site server, but, well, that's above my pay grade. Is there any way I can use the local server to control whether users have access to the VPN without needing to change the off-site server? If I could somehow use it to have a local vpn_users group without the users in it having to be located on the local server, that would probably work, but I have no idea how to set that up or if LDAP even supports such a configuration. For reference, I'm using the openvpn-auth-ldap (https://code.google.com/p/openvpn-auth-ldap/) plugin.

  • Sqlite3 "chained" query

    - by Arrieta
    I need to create a configuration file from a data file that looks as follows:

        MAN1_TIME '01-JAN-2010 00:00:00.0000 UTC'
        MAN1_RX   123.45
        MAN1_RY   123.45
        MAN1_RZ   123.45
        MAN1_NEXT 'MAN2'
        MAN2_TIME '01-MAR-2010 00:00:00.0000 UTC'
        MAN2_RX   123.45
        [...]
        MAN2_NEXT 'MANX'
        [...]
        MANX_TIME [...]

    This file describes different "legs" of a trajectory. In this case, MAN1 is chained to MAN2, and MAN2 to MANX. In the original file, the chains are not as obvious (i.e., they are non-sequential). I've managed to read the file and store it in an SQLite3 database (I'm using the Python interface). The table is stored with three columns: Id, Par, and Val; for instance, Id='MAN1', Par='RX', and Val='123.45'. I'm interested in querying such a database to obtain the information related to 'n' legs. In English, that would be: "Select RX, RY, RZ for the next five legs starting on MAN1". So the query would go to MAN1, retrieve RX, RY, RZ, then read the parameter NEXT and go to that Id, retrieve RX, RY, RZ; read the parameter NEXT; go to that one... like this five times. How can I pass such a query with "dynamic parameters"? Thank you.
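
    For reference, SQLite 3.8.3 and later can walk this kind of chain in a single statement with a recursive CTE; a sketch that assumes the rows live in a hypothetical table legs(Id, Par, Val) and that the NEXT values are stored without quotes:

        -- Follow the NEXT chain for 5 legs starting at MAN1, then pivot out RX/RY/RZ.
        WITH RECURSIVE chain(id, depth) AS (
            SELECT 'MAN1', 1
            UNION ALL
            SELECT t.Val, chain.depth + 1
            FROM   chain
                   JOIN legs AS t ON t.Id = chain.id AND t.Par = 'NEXT'
            WHERE  chain.depth < 5
        )
        SELECT c.depth, c.id,
               MAX(CASE WHEN l.Par = 'RX' THEN l.Val END) AS rx,
               MAX(CASE WHEN l.Par = 'RY' THEN l.Val END) AS ry,
               MAX(CASE WHEN l.Par = 'RZ' THEN l.Val END) AS rz
        FROM   chain AS c
               JOIN legs AS l ON l.Id = c.id
        GROUP  BY c.depth, c.id
        ORDER  BY c.depth;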

  • Slow performance of MySQL database on one server and fast on another one, with similar configurations

    - by Alon_A
    We have a web application that runs on two GoDaddy servers. We experience slow performance on our production server, although it has stronger hardware than the testing one, and it is dedicated. I'll start with the configurations.

        Testing:
          CentOS Linux 5.8, Linux 2.6.18-028stab101.1 on i686
          Intel(R) Xeon(R) CPU L5609 @ 1.87GHz, 8 cores
          60 GB total, 6.03 GB used
          Apache/2.2.3 (CentOS)
          MySQL 5.5.21-log
          PHP Version 5.3.15

        Production:
          CentOS Linux 6.2, Linux 2.6.18-028stab101.1 on x86_64
          Intel(R) Xeon(R) CPU L5410 @ 2.33GHz, 8 cores
          120 GB total, 2.12 GB used
          Apache/2.2.15 (CentOS)
          MySQL 5.5.27-log - MySQL Community Server (GPL) by Remi
          PHP Version 5.3.15

    We are running the same code on both servers. The problem: we have a function that executes ~30000 PDO exec commands. On our testing server it takes about 1.5-2 minutes to complete, and on our production server it can take more than 15 minutes to complete (as seen in qcachegrind). Researching the problem, we checked the live graphs in phpMyAdmin and discovered that the MySQL server on our testing server was performing at a steady level of 1000 execution statements per 2 seconds, while the slow production MySQL server was only doing 250 execution statements per 2 seconds, and not steadily at all, jumping from 0 to 250 every second. Comparing the configurations of both MySQL servers (the fast testing one on the left, the slow production one on the right), the differences are highlighted, but I can't find anything that could cause such a difference in behaviour, as the configs are mostly the same. Maybe you can see something that I can't. Note that our tables are all InnoDB, so the MyISAM difference is (probably) not relevant. Maybe it is the MySQL Community Server (GPL) that is installed on the production server that causes the slow performance? Or maybe it needs to be configured differently for 64-bit? I'm currently out of ideas...
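
    When two servers running the same major MySQL version differ this much, a cheap first step is to diff their runtime settings and status counters; these are standard MySQL statements (not taken from the post) to run on both machines and compare:

        -- Run on both servers and compare the output side by side.
        SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';
        SHOW GLOBAL VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
        SHOW GLOBAL VARIABLES LIKE 'query_cache%';
        SHOW GLOBAL STATUS   LIKE 'Threads_running';
        SHOW GLOBAL STATUS   LIKE 'Innodb_row_lock_time_avg';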

  • Nginx: Rewriting entire URI to query string

    - by Doug
    Still pretty new to nginx here, trying to get a simple rewrite to work, but the server just responds '404 not found'. My server block:

        server {
            listen 80;
            listen [::]:80;

            server_name pics.example.com;
            root /home/pics;

            rewrite ^(.*)$ index.php?tag=$1;

            location / {
                try_files $uri $uri/ $uri.php /index.html $uri =404;
                #try_files $uri =404;
                fastcgi_split_path_info ^([a-z]+)(/.+)$;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/tmp/php5-fpm.sock;
                fastcgi_index index.php;
            }

            location /doc/ {
                alias /usr/share/doc/;
                autoindex on;
                allow 127.0.0.1;
                deny all;
            }
        }

    pics.example.com/foobear should rewrite to pics.example.com/index.php?tag=foobear

  • AD license query

    - by Rajeev
    Hi, a basic question about AD licensing: if clients create an account on my company website and are authenticated through the AD server, do I need to have licenses for the clients coming via the web? And which licenses would come into the picture?

  • Database Instance

    - by Sam
    I read a statement in an exercise: construct a database instance which conforms to diagram 1 but not to diagram 2. The diagrams are different n-ary relationships: diagram 1 has a many-to-one-to-many-to-one relationship, and diagram 2 has a many-to-many-to-many-to-one relationship. So, to really understand this problem, what does a database instance mean? Does it mean making an example with abstract entities like a1, a2, or a3? Thanks for your time.
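
    For what it's worth, a "database instance" normally means concrete data that populates a schema while respecting its constraints, as opposed to the schema (diagram) itself; a tiny sketch with hypothetical tables:

        -- The schema (what a diagram describes): structure only.
        CREATE TABLE a (a_id INT PRIMARY KEY);
        CREATE TABLE b (b_id INT PRIMARY KEY,
                        a_id INT REFERENCES a (a_id));  -- many b rows per a row

        -- One instance of that schema: concrete rows that satisfy the constraints.
        INSERT INTO a VALUES (1), (2);
        INSERT INTO b VALUES (10, 1), (11, 1), (12, 2);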

  • PostGIS - can't create spatially-enabled database

    - by itgorilla
    I'm using Ubuntu 10.10, PostgreSQL 9.0 and PostGIS 1.5. I've installed PostGIS 1.5 from: https://launchpad.net/~ubuntugis/+archive/ubuntugis-unstable I added the PPA first, then used the command sudo apt-get install postgis to install PostGIS. I've been following these instructions to create a spatially-enabled database: http://postgis.refractions.net/docs/ch02.html#id2630100 I got to the point where it says:

        Now load the PostGIS object and function definitions into your database by loading the postgis.sql definitions file (located in [prefix]/share/contrib as specified during the configuration step).

        psql -d [yourdatabase] -f postgis.sql

    Well, there is no postgis.sql on my server after the installation. I did a sudo updatedb to make sure I could find postgis.sql, but it's not there. Any ideas? Thank you!

  • How do you query namespaces with PHP/XPath/DOM

    - by Alsbury
    I am trying to query an XML document that uses namespaces. I have had success with XPath without namespaces, but no results with namespaces. This is a basic example of what I was trying. I have condensed it slightly, so there may be small issues in my sample that detract from my actual problem.

    Sample XML:

        <?xml version="1.0"?>
        <sf:page>
          <sf:section>
            <sf:layout>
              <sf:p>My Content</sf:p>
            </sf:layout>
          </sf:section>
        </sf:page>

    Sample PHP code:

        <?php
        $path = "index.xml";
        $content = file_get_contents($path);
        $dom = new DOMDocument($content);
        $xpath = new DOMXPath($dom);
        $xpath->registerNamespace('sf', "http://developer.apple.com/namespaces/sf");
        $p = $xpath->query("//sf:p", $dom);

    My result is that "p" is a "DOMNodeList Object ( )" and its length is 0. Any help would be appreciated.

  • Storing date and time as epoch vs native datetime format in the database

    - by zakovyrya
    For most of my tasks I find it much easier to work with date and time in the epoch format: it's trivial to calculate a timespan or to determine whether one event happened before or after another; I don't have to deal with time-zone issues if the data comes from different geographical sources; and in the case of scripting languages, what I usually get from the database when I request a datetime-typed column is a string that I need to parse before I can work with it. This list can go on, but for me, in order to keep my code portable, that's enough reason to ditch the database's native datetime format and store date and time as an integer. What do you guys think?
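
    For comparison, MySQL (as one example) converts between the two representations cheaply, so the choice is largely about where the conversion happens; a sketch with a hypothetical table:

        -- Epoch stored as an integer; converted only when a readable value is needed.
        CREATE TABLE events (
            id         INT PRIMARY KEY,
            created_at INT NOT NULL   -- seconds since 1970-01-01 00:00:00 UTC
        );

        INSERT INTO events VALUES (1, UNIX_TIMESTAMP('2010-05-01 12:00:00'));

        SELECT id, FROM_UNIXTIME(created_at) AS created_dt
        FROM   events
        WHERE  created_at > UNIX_TIMESTAMP() - 86400;   -- events from the last 24 hours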

  • How to recreate missing Team Foundation Server database?

    - by Amadiere
    I've been trying out TFS 2010 Beta 2 on my local machine, or at least, I had it installed ready to do so. I had some issues with my MSSQL 2008 server, so I completely uninstalled and re-installed it, and that sorted it. However, I'm now in limbo with TFS: I have the software installed, but none of the SQL databases that go with it. I had no data and am not precious about how to go about it. I figure completely uninstalling and re-installing might be an idea and will most likely fix it (repair didn't work). Is there a quicker way? Is there a command-line utility that I can run, or a SQL script to recreate it all?

  • zabbix monitoring mysql database

    - by krisdigitx
    I have a server running multiple instances of MySQL, which also has the zabbix-agent running. In zabbix_agentd.conf I have specified:

        UserParameter=multi.mysql[*],mysqladmin --socket=$1 -uzabbixagent extended-status 2>/dev/null | awk '/ $3 /{print $$4}'

    where $1 is the socket instance. From the Zabbix server I can run the test successfully:

        zabbix_get -s ip_of_server -k multi.mysql[/var/lib/mysql/mysql2.sock]

    and it returns all the values. However, the Zabbix item/trigger does not generate the graphs. I have created a MACRO for $1, which is the socket location:

        {$MYSQL_SOCKET1} = '/var/lib/mysql/mysql2.sock'

    and I use this key in items to poll the value:

        multi.mysql[{$MYSQL_SOCKET1},Bytes_sent]

    LOGS: this is what I get in the logs:

        3360:20120214:144716.278 item [multi.mysql['/var/lib/mysql/mysql2.sock',Bytes_received]] error: Special characters '\'"`*?[]{}~$!&;()<>|#@' are not allowed in the parameters
        3360:20120214:144716.372 item [multi.mysql['/var/lib/mysql/mysql2.sock',Bytes_sent]] error: Special characters '\'"`*?[]{}~$!&;()<>|#@' are not allowed in the parameters

    Any ideas where the problem could be?

    FIXED:

        {$MYSQL_SOCKET1} = /var/lib/mysql/mysql2.sock

    I removed the single quotes from the line and it worked...

  • mySQL - Separate Lastname,Firstname and CompanyName entries from a single column

    - by Decalmo
    I've got a column in a database which contains company names and customer names all in one field. What I'd like to do is keep the CompanyName column completely intact, but wherever there is a comma in the CompanyName I'd like to take that information and populate it into FirstName and LastName fields. So that basically...

    Before:

        CompanyName:
        Big Company Inc
        Smith, John
        Sue, Maggie

    After:

        CompanyName:        LastName:   FirstName:
        Big Company Inc
        Smith, John         Smith       John
        Sue, Maggie         Sue         Maggie

    This one is pretty dang tricky for me... Any help is greatly appreciated!
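
    In MySQL, one possible way to do the split is SUBSTRING_INDEX on the comma; a sketch that assumes a hypothetical customers table with the LastName/FirstName columns already added, and that leaves rows without a comma untouched:

        -- Copy "Lastname, Firstname" values into the new columns; CompanyName stays intact.
        UPDATE customers
        SET    LastName  = TRIM(SUBSTRING_INDEX(CompanyName, ',', 1)),
               FirstName = TRIM(SUBSTRING_INDEX(CompanyName, ',', -1))
        WHERE  CompanyName LIKE '%,%';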

  • Use the repository pattern when using PLINQO generated data?

    - by Chad
    I'm "upgrading" an MVC app. Previously, the DAL was part of the Model, as a series of repositories (based on the entity name) using standard LINQ to SQL queries. Now, it's a separate project and is generated using PLINQO. Since PLINQO generates query extensions based on the properties of the entity, I started using them directly in my controllers... and eliminated the repositories altogether. It's working fine; this is more a question to draw upon your experience: should I continue down this path, or should I rebuild the repositories (using PLINQO as the DAL within the repository files)? One benefit of just using the PLINQO-generated data context is that when I need DB access, I just make one reference to the data context. Under the repository pattern, I had to reference each repository when I needed data access, sometimes needing to reference multiple repositories in a single controller. The big benefit I saw with the repositories was aptly named query methods (e.g. FindAllProductsByCategoryId(int id), etc.). With the PLINQO code, it's _db.Product.ByCatId(int id) - which isn't too bad either. I like both, but where it gets hairier is when the query uses predicates. I can roll that up into the repository query method, but with the PLINQO code it would be something like _db.Product.Where(x => x.CatId == 1 && x.OrderId == 1); I'm not so sure I like having code like that in my controllers. What's your take on this?

  • How to restore PostgreSQL database from .tar file?

    - by Stephen
    I have all PostgreSQL databases backed up during incremental backups using WHM, which creates a $dbName.tar file. Data is stored in these .tar files, but I do not know how to restore it back into the individual databases via SSH - in particular, the file location. I have been using:

        pg_restore -d client03 /backup/cpbackup/daily/client03/psql/client03.tar

    which generates the error 'could not open input file: Permission denied'. Any assistance appreciated.
