Search Results

Search found 42428 results on 1698 pages for 'database query'.


  • xpath query to find parent tag

    - by cru3l
    For example, I have this XML: <elements> <a> <b>6</b> <b>5</b> <b>6</b> </a> <a> <b>5</b> <b>5</b> <b>6</b> </a> <a> <b>5</b> <b>5</b> <b>5</b> <b>5</b> </a> </elements> I need an XPath query that returns the parent tag only if all of its children are equal to 5 (a[3] in this case). Something like //b[text()="5"]/.. but with a check of all the children's tags. Please note that the number of child tags can differ from node to node. Is this possible with an XPath query alone? Thanks
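
    A minimal sketch of one way to express this: select every a element that has b children and none whose value differs from 5.

        //a[b and not(b[. != '5'])]

    The inner predicate b[. != '5'] matches any child whose text is not "5", so wrapping it in not(...) keeps only parents where every child equals 5.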

    Read the article

  • Data Usage Checker Tools

    - by Lucifer
    Hey all, I am about to begin a project for a new client and am worried about a few things concerning data usage on their internet plan. We're in an area that most of the major networks don't cover, and the ones that do have very expensive plans with very low data allowances per month. I need to develop an app, but part of the problem lies in checking database values every 30 seconds. It's pretty important that this check happens every 30 seconds, as the database is updated all day, every day, approximately every 5 seconds (apparently). Each row in the database consists of about a page full of text if you were to paste it into MS Word. So, are there any logical ways of minimizing data usage in my case, and how can I see exactly how much data is used just to establish a connection to the database? Are there any tools for this kind of information? Thanks :)
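
    One way to keep the 30-second check cheap (a sketch, with hypothetical table and column names) is to poll a small change marker instead of pulling whole rows, and only fetch full rows when the marker moves:

        -- Lightweight poll: a few bytes per check instead of full rows
        SELECT MAX(updated_at) AS last_change, COUNT(*) AS row_count
        FROM   messages;                      -- hypothetical table being watched

        -- Only when last_change / row_count differ from the previous poll:
        SELECT id, body
        FROM   messages
        WHERE  updated_at > @last_seen;       -- fetch just the new or changed rows

    For measuring how much data a single connection handshake uses, a packet capture tool such as Wireshark will show the actual bytes on the wire.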

    Read the article

  • How to speed up a slow UPDATE query

    - by Mike Christensen
    I have the following UPDATE query: UPDATE Indexer.Pages SET LastError=NULL where LastError is not null; Right now, this query takes about 93 minutes to complete. I'd like to find ways to make this a bit faster. The Indexer.Pages table has around 506,000 rows, and about 490,000 of them contain a value for LastError, so I doubt I can take advantage of any indexes here. The table (when uncompressed) has about 46 gigs of data in it, however the majority of that data is in a text field called html. I believe simply loading and unloading that many pages is causing the slowdown. One idea would be to make a new table with just the Id and the html field, and keep Indexer.Pages as small as possible. However, testing this theory would be a decent amount of work since I actually don't have the hard disk space to create a copy of the table. I'd have to copy it over to another machine, drop the table, then copy the data back which would probably take all evening. Ideas? I'm using Postgres 9.0.0. UPDATE: Here's the schema: CREATE TABLE indexer.pages ( id uuid NOT NULL, url character varying(1024) NOT NULL, firstcrawled timestamp with time zone NOT NULL, lastcrawled timestamp with time zone NOT NULL, recipeid uuid, html text NOT NULL, lasterror character varying(1024), missingings smallint, CONSTRAINT pages_pkey PRIMARY KEY (id ), CONSTRAINT indexer_pages_uniqueurl UNIQUE (url ) ); I also have two indexes: CREATE INDEX idx_indexer_pages_missingings ON indexer.pages USING btree (missingings ) WHERE missingings > 0; and CREATE INDEX idx_indexer_pages_null ON indexer.pages USING btree (recipeid ) WHERE NULL::boolean; There are no triggers on this table, and there is one other table that has a FK constraint on Pages.PageId.
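
    One commonly suggested approach, sketched below: since roughly 490,000 of the 506,000 rows are affected, an index on lasterror won't help, but batching the update keeps each transaction small and lets VACUUM reclaim the dead row versions between passes (Postgres writes a new version of every updated row).

        -- A sketch: repeat until it reports 0 rows updated, running VACUUM
        -- (or letting autovacuum catch up) between batches.
        UPDATE indexer.pages
        SET    lasterror = NULL
        WHERE  id IN (
                 SELECT id
                 FROM   indexer.pages
                 WHERE  lasterror IS NOT NULL
                 LIMIT  10000
               );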

    Read the article

  • In PHP + MySQL, How do I join many tables with conditions

    - by Moe
    Hi, I'm trying to get a user's full activity throughout the website. I need to join many tables throughout the database, with the condition that it is one user. What I currently have written is: SELECT * FROM comments AS c JOIN rphotos AS r ON c.userID = r.userID AND c.userID = '$defineUserID'; But what it returns is everything about the user, except that it repeats rows. For instance, one user has 6 photos and 5 comments, so I expect the join to return 11 rows. Instead it returns 30 results, like so: PhotoID = 1; CommentID = 1; PhotoID = 1; CommentID = 2; PhotoID = 1; CommentID = 3; and so on... What am I doing wrong?
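
    The join pairs every photo with every comment (6 x 5 = 30 rows). A sketch of one alternative: select each activity type separately and stack the results (the created column is an assumption; substitute whatever timestamp the tables actually have).

        SELECT 'comment' AS activity, c.commentID AS itemID, c.created
        FROM   comments c
        WHERE  c.userID = '$defineUserID'
        UNION ALL
        SELECT 'photo'   AS activity, r.photoID  AS itemID, r.created
        FROM   rphotos r
        WHERE  r.userID = '$defineUserID'
        ORDER  BY created DESC;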

    Read the article

  • Performance of inter-database query (between linked servers)

    - by Swoosh
    I have an import between 2 linked servers. I basically need to get the data from a multi-table join into a table on my side. The current query is something like this: select a.* from db1.dbo.tbl1 a inner join db1.dbo.tbl2 on ... inner join db1.dbo.tbl3 on ... inner join db1.dbo.tbl4 on ... inner join db2.dbo.myside on ... db1 = linked server db2 = my own database After this, I use an INSERT INTO + SELECT to add the data to my table in db2 (usually a few hundred records; this import runs once a minute). My question is related to performance. The tables on the linked server (tbl1, tbl2, tbl3, tbl4) are huge, with millions of records, and they are slowing down the import process. I was told that if I do the join on the "other" side (db1, the linked server), for example in a stored procedure, then even if the query looks the same it would run faster. Is that right? This is kind of hard to test. Note that the join contains a table from my database too. Also, are there other "tricks" I could use to make this run faster? Thanks
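
    One commonly suggested pattern, sketched below with hypothetical column names: wrap the heavy remote-only joins in OPENQUERY so they execute entirely on the linked server, and join the (small) result against the local table afterwards.

        INSERT INTO db2.dbo.import_target (key_col, col1)        -- hypothetical target table
        SELECT q.key_col, q.col1
        FROM   OPENQUERY(db1, '
                   SELECT a.key_col, a.col1
                   FROM   dbo.tbl1 a
                   JOIN   dbo.tbl2 b ON b.key_col = a.key_col
                   JOIN   dbo.tbl3 c ON c.key_col = a.key_col
                   JOIN   dbo.tbl4 d ON d.key_col = a.key_col') AS q
        JOIN   db2.dbo.myside m ON m.key_col = q.key_col;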

    Read the article

  • Query specific nameserver for a particular domain upon VPN connect

    - by MT
    Some background: I have a work laptop with Ubuntu 9.10 on it. I have a small network at home where I've been running some basic services (for myself/my family) for some 10 years. In my home network there is a nameserver (Fedora) running Bind 9 with two "views". One view is the "outside" view and it provides name resolution (to the Internet at large) for email, a wiki, and a couple of blogs. The "inside" view provides name resolution (to the internal RFC1918 addresses of these servers) as well as all the inside hosts, network equipment, ...etc. I connect with an openvpn client to my home network from outside (such as work). What I'd like to be able to do is resolve names on my internal network across this VPN (so I get the RFC1918 "inside" responses) without fully changing my resolver to the DNS server at my house. For example, if I connect to the VPN from work, I can change my resolver (by editing resolv.conf) to the DNS server at my house (across the VPN) and then successfully resolve all of the inside DNS names on my home network. The issue I have with this is that now I'm no longer able to resolve "inside" names provided by my work's DNS servers (because I'm using my home DNS server). Alternatively, I can connect to the VPN and access my home servers via IP addresses directly, but this is inconvenient and causes issues with Apache name-based hosting (among other things). In the end, the effect I'm trying to achieve is as follows: When I connect to the VPN I automatically start sending DNS requests for *.myhomedomain.com to my home nameserver, but any other requests continue to go to the nameserver I was using before (the one I received on my company LAN via DHCP). When I disconnect the VPN, requests for *.myhomedomain.com go back to the local LAN DNS server (i.e. all requests are going there now). I'm looking for suggestions as to how this can be accomplished.
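
    One way to get this split-DNS behaviour on Ubuntu (a sketch, with a placeholder address for the home nameserver): run a local dnsmasq instance, point /etc/resolv.conf at 127.0.0.1, and forward only the home domain to the home server. The forwarding rule can be added and removed by the openvpn up/down scripts.

        # /etc/dnsmasq.d/homedomain.conf   (placeholder IP for the home BIND server)
        server=/myhomedomain.com/192.168.1.1
        # Everything else falls through to the nameservers dnsmasq learned
        # from the work LAN (e.g. via resolv.conf or --resolv-file).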

    Read the article

  • how to write T-SQL to compare and copy data?

    - by George2
    Hello everyone, I have two SQL Server 2008 Enterprise databases (on two machines); one is the master database and the other is the slave database. I want to transfer updates from a table in the source database to a table in the destination database (the two tables have the same schema, and both use a single column as a unique primary key). The transfer rule is, in short, to keep the destination database the same as the source database as the source is updated: if there is a new row in the source database but not in the destination database, insert the row into the destination database; if a row does not exist in the source database but exists in the destination database, delete the row from the destination database; if a row's content (i.e. columns other than the primary key columns) changes in the source database, update the new content in the destination database. Thanks in advance, George
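
    A sketch of one way to express all three rules in a single statement with MERGE (available in SQL Server 2008); the linked server, table, and column names here are placeholders, and the source rows could equally be staged into a local table first.

        MERGE INTO dbo.DestTable AS d
        USING SourceServer.SourceDb.dbo.SourceTable AS s    -- hypothetical linked server path
              ON d.Id = s.Id
        WHEN MATCHED AND (d.Col1 <> s.Col1 OR d.Col2 <> s.Col2) THEN
             UPDATE SET d.Col1 = s.Col1, d.Col2 = s.Col2
        WHEN NOT MATCHED BY TARGET THEN
             INSERT (Id, Col1, Col2) VALUES (s.Id, s.Col1, s.Col2)
        WHEN NOT MATCHED BY SOURCE THEN
             DELETE;

    If the non-key columns can be NULL, the <> comparisons need NULL-aware handling.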

    Read the article

  • Warn user when new data is inserted on database

    - by João Menighin
    I don't know how to search for this, so I'm kind of lost (the two topics I saw here were closed). I have a news website and I want to warn the user when new data is inserted into the database. I want to do it like here on StackOverflow, where we are warned without reloading the page, or like on Facebook, where you are warned about new messages/notifications without reloading. What is the best way to do that? Is it some kind of listener with a timeout that is constantly checking the database? It doesn't sound efficient... Thanks in advance.
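
    The usual options are periodic AJAX polling (simplest), long polling, or server push. A minimal polling sketch, assuming jQuery is loaded and a hypothetical endpoint that returns the id of the newest news item as plain text:

        var lastSeenId = 0;
        setInterval(function () {
            $.get('/news/latest-id', function (data) {     // hypothetical endpoint
                var latest = parseInt(data, 10);
                if (latest > lastSeenId) {
                    lastSeenId = latest;
                    $('#new-news-banner').show();          // "new items available" notice
                }
            });
        }, 30000);                                         // one tiny request every 30 seconds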

    Read the article

  • JQuery Ajax Updating MySQL Database, But Not Running Success Function

    - by myrmidon16
    I am currently using the JQuery ajax function to call an exterior PHP file, in which I select and add data in a database. Once this is done, I run a success function in JavaScript. What's weird is that the database is updating successfully when ajax is called, however the success function is not running. Here is my code: <!DOCTYPE html> <head> <script type="text/javascript" src="jquery-1.6.4.js"></script> </head> <body> <div onclick="addtask();" style="width:400px; height:200px; background:#000000;"></div> <script> function addtask() { var tid = (Math.floor(Math.random() * 3)) + 1; var tsk = (Math.floor(Math.random() * 10)) + 1; if(tsk !== 1) { $.ajax({ type: "POST", url: "taskcheck.php", dataType: "json", data: {taskid:tid}, success: function(task) {alert(task.name);} }); } } </script> </body> </html> And the PHP file: session_start(); $connect = mysql_connect('x', 'x', 'x') or die('Not Connecting'); mysql_select_db('x') or die ('No Database Selected'); $task = $_REQUEST['taskid']; $uid = $_SESSION['user_id']; $q = "SELECT task_id, taskname FROM tasks WHERE task_id=" .$task. " LIMIT 1"; $gettask = mysql_fetch_assoc(mysql_query($q)); $q = "INSERT INTO user_tasks (ut_id, user_id, task_id, taskstatus, taskactive) VALUES (null, " .$uid. ", '{$gettask['task_id']}', 0, 1)"; $puttask = mysql_fetch_assoc(mysql_query($q)); $json = array( "name" => $gettask['taskname'] ); $output = json_encode($json); echo $output; Let me know if you have any questions or comments, thanks.
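
    One likely culprit, shown as a hedged sketch: the INSERT branch calls mysql_fetch_assoc() on the result of an INSERT (which returns TRUE, not a result set), and the PHP warning that produces is echoed before the JSON. With dataType: "json", jQuery then fails to parse the response and never fires the success callback. The renamed variables below are illustrative.

        header('Content-Type: application/json');

        $gettask = mysql_fetch_assoc(mysql_query($select_sql));   // SELECT: fetching is fine
        mysql_query($insert_sql) or die(mysql_error());           // INSERT: nothing to fetch

        echo json_encode(array('name' => $gettask['taskname']));
        exit;   // make sure nothing else is appended to the JSON body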

    Read the article

  • Create DB in Sql Server based on Visio Data Model

    - by Yaakov Ellis
    I have created a database model in Visio Professional (2003). I know that the Enterprise version has the ability to create a DB in Sql Server based on the data in Visio. I do not have the option to install Enterprise. Aside from going through the entire thing one table and relationship at a time and creating the whole database from scratch, by hand, can anyone recommend any tool/utility/method for converting the visio database model into a Sql Script that can be used to create a new DB in Sql Server?

    Read the article

  • Suggestion on Database structure for relational data

    - by miccet
    Hi there. I've been wrestling with this problem for quite a while now and the automatic mails with 'Slow Query' warnings are still popping in. Basically, I have Blogs with a corresponding table as well as a table that keeps track of how many times each Blog has been viewed. This last table has a huge number of records since this page is relatively high traffic and it logs every hit as an individual row. I have tried indexes on the fields that are included in the WHERE clause, but it doesn't seem to help. I have also tried to clean the table each week by removing records older than 1 week. So, I'm asking you guys, how would you solve this? The query that I know is causing the slowness is generated by Rails and looks like this: SELECT count(*) AS count_all FROM blog_views WHERE (created_at >= '2010-01-01 00:00:01' AND blog_id = 1); The tables have the following structures: CREATE TABLE IF NOT EXISTS 'blogs' ( 'id' int(11) NOT NULL auto_increment, 'name' varchar(255) default NULL, 'perma_name' varchar(255) default NULL, 'author_id' int(11) default NULL, 'created_at' datetime default NULL, 'updated_at' datetime default NULL, 'blog_picture_id' int(11) default NULL, 'blog_picture2_id' int(11) default NULL, 'page_id' int(11) default NULL, 'blog_picture3_id' int(11) default NULL, 'active' tinyint(1) default '1', PRIMARY KEY ('id'), KEY 'index_blogs_on_author_id' ('author_id') ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ; And CREATE TABLE IF NOT EXISTS 'blog_views' ( 'id' int(11) NOT NULL auto_increment, 'blog_id' int(11) default NULL, 'ip' varchar(255) default NULL, 'created_at' datetime default NULL, 'updated_at' datetime default NULL, PRIMARY KEY ('id'), KEY 'index_blog_views_on_blog_id' ('blog_id'), KEY 'created_at' ('created_at') ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;
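
    For this particular query (equality on blog_id plus a range on created_at), the existing single-column indexes don't serve the combined predicate well; a composite index covering both columns, sketched below, is the usual first fix. A roll-up/summary table is the common next step if the raw hit log keeps growing.

        ALTER TABLE blog_views
          ADD INDEX index_blog_views_on_blog_id_and_created_at (blog_id, created_at);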

    Read the article

  • SQL Query to separate data into two fields

    - by Phillip
    I have data in one column that I want to separate into two columns. The data is separated by a comma if present. This field can have no data, only one set of data, or two sets of data separated by the comma. Currently I pull the data and save it as a comma-delimited file, then use FoxPro to load the data into a table, process the data as needed, and re-insert the data into a different SQL table for my use. I would like to drop the FoxPro portion and have the SQL query separate the data for me. Below is a sample of what the data looks like. Store Amount Discount 1 5.95 1 5.95 PO^-479^2 1 5.95 PO^-479^2 2 5.95 2 5.95 PO^-479^2 2 5.95 +CA8A09^-240^4,CORDRC^-239^7 3 5.95 3 5.95 +CA8A09^-240^4,CORDRC^-239^7 3 5.95 +CA8A09^-240^4,CORDRC^-239^7 In the data above I want to sum the data in the Amount field to get a gross amount, then pull out the specific discount amount, which is located between the caret characters, and sum it to get the total discount amount, then add the two together to get the total net amount. The query I want to write will separate the discount field as needed (see store 2, line 3 for two discounts being applied), then pull out the value between the caret characters.
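
    A sketch of the single-discount case in T-SQL (the table name is a placeholder): CHARINDEX finds the first and second carets and SUBSTRING takes what lies between them. The two-discount rows would first need to be split on the comma (for example via a string-splitting function) before applying the same extraction to each piece.

        SELECT Store,
               Amount,
               CASE WHEN Discount LIKE '%^%^%'
                    THEN SUBSTRING(Discount,
                                   CHARINDEX('^', Discount) + 1,
                                   CHARINDEX('^', Discount, CHARINDEX('^', Discount) + 1)
                                     - CHARINDEX('^', Discount) - 1)
               END AS DiscountValue           -- e.g. '-479' from 'PO^-479^2'
        FROM   dbo.Sales;                     -- hypothetical table name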

    Read the article

  • get data from gridview without querying database

    - by frank2009
    Hi there, I am new at this so please bear with me... I have managed to get the following code to work, so when I click on the "select" link in each row of the gridview, the data is transferred to another label/textbox on the webpage. So far so good; the thing is that every time I click on select, it goes and checks the database for the data and there is a delay of a few seconds... I was hoping that the data, since it is already visible in the grid rows, could simply be "picked up" and used in other labels/textboxes without requerying the database. Is this possible? Thanks in advance Protected Sub GridView1_SelectedIndexChanged(ByVal sender As Object, ByVal e As System.EventArgs) Label1.Text = GridView2.SelectedRow.Cells(8).Text Label2.Text = GridView2.SelectedRow.Cells(9).Text TextBox1.Text = GridView2.SelectedRow.Cells(7).Text End Sub

    Read the article

  • NHibernate SchemaUpdate adding existing foreign keys again?

    - by afsharm
    I'm using SchemaUpdate to synchronize my hbms with an existing database. The database was recently created based on the hbms and is completely up to date, but SchemaUpdate generates all the foreign key constraints again. For example, suppose you have Student and Teacher. Student has an association to Teacher named ArtTeacher; ArtTeacher is a foreign key from Student to Teacher. Suppose the database is up to date and currently holds Student, Teacher, and their foreign key relation, so the HBM and the database are equivalent. Now SchemaUpdate should not do anything, but when I look at its generated scripts, it reproduces that foreign key again. Why does this happen? Is there any way to avoid it?
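
    One thing that is often suggested for this (a hedged sketch, not a guaranteed fix): give the association an explicit constraint name in the mapping, so the name SchemaUpdate computes matches the one already in the database rather than an auto-generated one.

        <!-- in Student.hbm.xml; column and constraint names are illustrative -->
        <many-to-one name="ArtTeacher" class="Teacher"
                     column="ArtTeacherId"
                     foreign-key="FK_Student_ArtTeacher" />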

    Read the article

  • Linq to Sql query reuse

    - by UserControl
    Let's say we have views V1 and V2, where V2 is declared like: create view V2 as select V1.*, V2.C1, V2.C2, V2.C3 from V1 join V2 on V1.Key = V2.Key where bla-bla So V2 narrows down the result set of V1 and adds some joins. And there is a retrieval routine in C#, IEnumerable<V1> GetData(MyFilter filter, MySortOrder order) {}, that I want to reuse with V2. Is it possible without performing the joins in L2S rather than in the database? Should I manually create a base class in the database context or something?

    Read the article

  • Notepad Tutorial: deleteDatabase() function

    - by FelixA
    Hello, I have a short question about the Notepad tutorial on the Android website. I wrote a simple function in the tutorial code to delete the whole database. It looks like this: DataHelper.java public void deleteDatabase() { this.mDb.delete(DATABASE_NAME, null, null); } Notepadv1.java @Override public boolean onCreateOptionsMenu(Menu menu) { boolean result = super.onCreateOptionsMenu(menu); menu.add(0, DELETE_ID, 0, "Delete whole Database"); return result; } @Override public boolean onOptionsItemSelected(MenuItem item) { switch (item.getItemId()) { case DELETE_ID: mDbHelper.deleteDatabase(); return true; } return super.onOptionsItemSelected(item); } But when I run the app and try to delete the database, I get this error in LogCat: sqlite returned: error code = 1, msg= no such table: data Can you help me fix this problem? It seems that the deleteDatabase function cannot reach the database. Thank you very much. Felix
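
    The likely cause (hedged): SQLiteDatabase.delete() expects a table name, and the tutorial's DATABASE_NAME constant is the database file name ("data"), hence "no such table: data". A sketch of two alternatives:

        // Clear all rows from the notes table (DATABASE_TABLE is "notes" in the tutorial code)
        public void deleteAllNotes() {
            mDb.delete(DATABASE_TABLE, null, null);
        }

        // Or remove the database file itself via the Context (mCtx in the tutorial's helper):
        // mCtx.deleteDatabase(DATABASE_NAME);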

    Read the article

  • Multiple/nested "select where" with Zend_Db_Select

    - by DJRayon
    Hi there, I need to create something like this: select name from table where active = 1 AND (name LIKE 'bla' OR description LIKE 'bla') The first part is easy: $sqlcmd = $db->select() ->from("table", "name") ->where("active = ?", 1) Now comes the tricky part. How can I nest? I know that I can just write ->orWhere("name LIKE ? OR description LIKE ?", "bla") But that's wrong, because I need to change all the parts dynamically. The query is built every time the script runs; some parts get deleted, some altered. In this example I need to add those ORs because sometimes I need to search wider. "My Zend logic" tells me that the correct way is like this: $sqlcmd = $db->select() ->from("table", "name") ->where("active = ?", 1) ->where(array( $db->select->where("name LIKE ?", "bla"), $db->select->orWhere("description LIKE ?", "bla") )) But that doesn't work (at least I don't remember it working). Please, can someone help me find an object-oriented way of nesting "where"s?
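
    A sketch of the usual approach: Zend_Db_Select wraps each where() clause in its own parentheses and joins clauses with AND, so passing the whole OR group as a single where() call produces the nesting; quoteInto() handles the quoting when the placeholders need separate values.

        $needle = 'bla';   // illustrative search term
        $select = $db->select()
            ->from('table', array('name'))
            ->where('active = ?', 1)
            ->where($db->quoteInto('name LIKE ?', $needle)
                  . ' OR '
                  . $db->quoteInto('description LIKE ?', $needle));
        // Roughly: SELECT name FROM table
        //          WHERE (active = 1) AND (name LIKE 'bla' OR description LIKE 'bla')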

    Read the article

  • trying to backup mysql database using php

    - by user225269
    I got this code from this site: http://www.php-mysql-tutorial.com/wikis/mysql-tutorials/using-php-to-backup-mysql-databases.aspx But I'm just a beginner, so I don't know what config.php and opendb.php are supposed to contain. Do I have to create those 2 files in order for this code to work? If yes, how do I create them? The site doesn't explain how. <?php include 'config.php'; include 'opendb.php'; $tableName = 'mypet'; $backupFile = 'backup/mypet.sql'; $query = "SELECT * INTO OUTFILE '$backupFile' FROM $tableName"; $result = mysql_query($query); include 'closedb.php'; ?> Can I just put these lines at the top of the code so that I won't need the include 'opendb.php' anymore: $con = mysql_connect("localhost","root",""); if (!$con) { die('Could not connect: ' . mysql_error()); } mysql_select_db("Hospital", $con);
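
    In tutorials like that one, config.php typically holds the connection settings and opendb.php/closedb.php open and close the connection, so inlining your own connect code works. A minimal self-contained sketch (the MySQL user needs the FILE privilege, and the server needs write access to the backup path):

        <?php
        $con = mysql_connect('localhost', 'root', '')
            or die('Could not connect: ' . mysql_error());
        mysql_select_db('Hospital', $con) or die('No database selected');

        $tableName  = 'mypet';
        $backupFile = 'backup/mypet.sql';
        $query  = "SELECT * INTO OUTFILE '$backupFile' FROM $tableName";
        $result = mysql_query($query) or die(mysql_error());

        mysql_close($con);
        ?>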

    Read the article

  • Sharepoint Web Application

    - by Debasish Pramanik
    We are really in a mess. Here is what happened: we took a backup of the WSS_Content database, removed the database from the Central Administration page, and then added a new database for the website. Now we are getting the error below: HTTP/1.1 404 Connection: close Date: Wed, 07 Apr 2010 10:04:54 GMT Server: Microsoft-IIS/6.0 X-Powered-By: ASP.NET MicrosoftSharePointTeamServices: 12.0.0.4518 Even replacing the database is not helping. Can someone help us?

    Read the article

  • Windows service thread not updating database until complete

    - by dfarney
    I have a windows service with a FileSystemWatcher. When a new file is dropped in my folder a new thread is created, started, and I begin processing the file. Throughout this process I am making updates to the database (Linq to SQL) to keep track of the file's processing progress. Problem is none of my database updates are reflected throughout the process, just an update after everything has been completed. Any ideas? Note: when doing dev/testing my code was in an aspx page and worked great, but when I put it in a windows service I no longer get the progress updates. Thanks!
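
    One hedged possibility: if the whole file is processed inside a single DataContext or ambient TransactionScope, the intermediate updates only become visible to other connections when that unit of work completes. A sketch of writing each progress update through its own short-lived context (the names here are hypothetical):

        private void UpdateProgress(Guid fileId, int percentComplete)
        {
            // A new, short-lived context per update so the row is committed immediately
            using (var db = new ProcessingDataContext())
            {
                var job = db.FileJobs.Single(j => j.Id == fileId);
                job.Progress = percentComplete;
                db.SubmitChanges();
            }
        }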

    Read the article

  • Can't connect to MySQL database hosted in CloudBees

    - by user3692698
    I have a free CloudBees account and created a free ClearDB database using their wizards. My trouble is that when I use their connection information (whether I try to connect from my Java app or an outside tool - SQLyog, to be exact) I get the error: Access denied for user 'b51dbc5757d79f'@'%' to database 'mywiki'. The username provided by CloudBees does not contain the extra characters that the error message is displaying, which seems like it would be a problem, but I'm not sure there is anything I can do about that since everything is configured for me. The username I am given is: b51dbc5757d79f - which I can delete and rebuild after sharing here :)

    Read the article

  • Django form to enter/save html to database

    - by Ian
    I'm in my first week of Django development and am working on an admin page that will let me write some quick html using TinyMCE and then save it to the database. I don't need to display this web page on the site or add it to urls.py, etc. The html snippet will be loaded from the database and used in a view function. I've read in "Practical Django Projects" how to integrate TinyMCE, so my question is more concerned with the best approach for the form itself. Specifically: 1. Is there a built-in form like flatpage that works well for this? I only need one field in the form for the html. 2. How do I save the form's text after it's entered? I created a model with a JSONField to save the html in, but I'm not clear on what to do next. Thanks.
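
    If the snippet never needs to be served at its own URL, a small ModelForm over a model with a single TextField may be simpler than reusing flatpages, and a plain TextField is usually a better fit than a JSONField for raw HTML. A minimal sketch (model and field names are illustrative), with form.save() in the view writing the row:

        from django import forms
        from django.db import models

        class HtmlSnippet(models.Model):
            html = models.TextField()            # the TinyMCE content goes here

        class HtmlSnippetForm(forms.ModelForm):
            class Meta:
                model = HtmlSnippet
                fields = ['html']

        # In the admin view:
        #     form = HtmlSnippetForm(request.POST)
        #     if form.is_valid():
        #         form.save()                    # inserts/updates the row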

    Read the article

  • Update database using restful ASP.NET MVC 1.0

    - by Mike
    We are building an MVC 1.0 application right now and using LINQ to SQL for our DAL. Following the RESTful approach, we have multiple actions depending on whether you are updating or loading or whatnot. The two that I'm concerned with at this point are the Edit and Update action methods. Here are the signatures: public ActionResult Update(int customerId, int Id, AggregateModel viewModel) public ActionResult Edit(int customerId, int Id, AggregateModel viewModel) So the idea is that in the Edit method we load up our viewModel and pass it to the view. The user makes changes to the model and then posts back to the Update method. The first thing I could think of would be to get the viewModel from the database again, copy the changes in one by one from the posted-back model, and then submit it. This would work but seems clunky to me. What is the best way to insert those changes into the database at this point?
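
    A sketch of one common pattern in MVC 1.0: re-load the entity in the POST action and let TryUpdateModel copy the posted values onto it, instead of copying fields by hand (the repository calls here are hypothetical).

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Update(int customerId, int id)
        {
            var model = _repository.GetAggregate(customerId, id);   // hypothetical data access
            if (TryUpdateModel(model))
            {
                _repository.Save();                                  // SubmitChanges() underneath
                return RedirectToAction("Edit", new { customerId, id });
            }
            return View("Edit", model);
        }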

    Read the article

  • query keepalived

    - by tdimmig
    *Note: I have trouble deciding what should go in serverfault and what should go in superuser, if some kindly admin decides this is in the wrong place please move it for me - many thanks. I am implementing a basic HA system with keepalived. I only want to be notified of the failover in the case of hardware failure. I do, however, have the servers switch roles periodically. I have a track_script running on the backup that will vary it's return between 0 and 1 on an interval (once a week, once a month, whatever). Upon returning 0, the priority is raised above that of the master, upon returning 1 the priority is lowered again. This way they trade places on the configured interval. The question: What can I do to tell the difference between a switch caused by my script, and a switch caused because one of the servers died? I certainly want to be notified when there is an actual problem, but not every time the servers change places because of the script. I see that version 1.2.7 has snmp support and I may be able to use it to get some information that could tell me one way or another, but to be honest I've never used snmp before and I don't know how to get the information I want with it (my Google foo failed me).
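
    One hedged approach: have the priority-swapping track_script drop a flag file before it triggers a planned transition, and use keepalived's notify_master hook to send the alert only when that flag is absent. A sketch (paths and addresses are placeholders):

        #!/bin/sh
        # /usr/local/bin/became-master.sh, wired up in keepalived.conf with:
        #     notify_master "/usr/local/bin/became-master.sh"
        FLAG=/var/run/keepalived.planned_swap
        if [ -f "$FLAG" ]; then
            rm -f "$FLAG"            # planned role swap: stay quiet
        else
            echo "Unplanned keepalived failover on $(hostname)" \
                | mail -s "keepalived failover" admin@example.com
        fi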

    Read the article
