Search Results

Search found 5233 results on 210 pages for 'records'.


  • javascript toolkit for offline webapps

    - by anjanb
    Hi all, we're building a survey webapp which will let the user add new records to the survey when offline and will upload them when the browser reconnects with the server. We've identified that this will need offline storage, and hence Google Gears seems to be an obvious choice (we understand that Adobe Flash has offline storage, but we are not sure if that is the best way). I am aware of the Dojo Offline javascript toolkit, which uses Google Gears for the underlying functionality. However, Dojo Offline is not part of the Dojo toolkit after version 1.3 (currently Dojo is 1.4.2). The Google Gears toolkit is currently frozen except for critical vulnerability fixes (it has not been updated for almost a year) because they think that HTML5 is the way to go. Hence, we're looking for a higher abstraction on top of the Google Gears engine TODAY, one which will (in the future) switch the underlying engine to HTML5 if the browser supports the HTML5 standards. We'd love to use Dojo, but they have discontinued Dojo Offline -- we'd prefer something that will be maintained for some time. What are possible good strategies and JS toolkits/libraries to use for building this webapp? Please advise.
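    As a rough illustration of the "higher abstraction" idea -- not a recommendation of any particular toolkit -- a small hand-rolled wrapper can feature-detect HTML5 storage first and fall back to the Gears database API. Everything below (store name, table, wrapper methods) is a hypothetical sketch:

```javascript
// Minimal sketch of an engine-switching storage wrapper (all names hypothetical).
function createOfflineStore() {
    if (window.localStorage) {                       // HTML5 path
        return {
            save: function (key, value) { localStorage.setItem(key, JSON.stringify(value)); },
            load: function (key) { return JSON.parse(localStorage.getItem(key) || "null"); }
        };
    }
    if (window.google && google.gears) {             // Gears fallback
        var db = google.gears.factory.create('beta.database');
        db.open('survey-offline');
        db.execute('CREATE TABLE IF NOT EXISTS pending (k TEXT, v TEXT)');
        return {
            save: function (key, value) { db.execute('INSERT INTO pending VALUES (?, ?)', [key, JSON.stringify(value)]); },
            load: function (key) { /* SELECT v FROM pending WHERE k = ? */ }
        };
    }
    throw new Error('No offline storage available');
}
```

    A real implementation would also need to queue the pending survey records and flush them to the server when a reconnect is detected.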

    Read the article

  • Cache data in SQL CE database

    - by user93422
    Background: I have a SQL CE database that is constantly updated (every second), and a (web) application that allows a user to look at the data in real time. At some point the user can click a "take a snapshot" button, which opens the snapshot in a different window. On that form there are "print" and "download" buttons that either generate a page for printing or stream the data as a CSV file - but the same data snapshot has to be used, i.e. I can't go to the DB to get the latest data for that.
    Details: The SQL CE database is exposed through a WCF web service. A snapshot consists of up to 500 records, 10 columns each. An expiration time of 2 hours on the snapshot is sufficient. It is a low-traffic application, so I don't expect more than a few (5) connections at the same time. Losing a snapshot is not a big deal; the user can simply generate a new one. The database is accessed by a self-hosted WCF web service using LINQ to SQL. The web site is ASP.NET MVC hosted on UltiDev Cassini. The database and web site will most likely be on the same box when deployed. The entire app is intranet bound.
    Problem: I need to cache the snapshot of the data at the moment the user presses the "take a snapshot" button, so that I can use the same data to generate the print page or the file for download.
    Solution 1: Each time there is a need to generate a snapshot, I will create a table in the database. Since there are no temp tables in SQL CE, I will need to clean it up myself.
    Solution 2: Cache the snapshot in memory on either the DB server or the web server.
    Question: Is there anything wrong with the proposed solutions? Any different suggestions?
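    A rough sketch of what Solution 2 could look like on the web-server side, assuming .NET 4's System.Runtime.Caching is available; the class and member names here are hypothetical, and the snapshot token would be carried in the URL of the snapshot window:

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.Caching;

public class SnapshotCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    // Store the rows the user was looking at and hand back a token for later requests.
    public static string TakeSnapshot(IList<Dictionary<string, object>> rows)
    {
        string token = Guid.NewGuid().ToString("N");
        Cache.Add(token, rows, DateTimeOffset.Now.AddHours(2)); // 2-hour expiration is enough
        return token;
    }

    // Both the print page and the CSV download read the same cached rows by token.
    public static IList<Dictionary<string, object>> GetSnapshot(string token)
    {
        return Cache.Get(token) as IList<Dictionary<string, object>>;
    }
}
```

    If the cache entry has expired, GetSnapshot returns null and the user simply takes a new snapshot, which matches the stated requirements.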

    Read the article

  • Rewrite a URL that's already been redirected?

    - by Jack
    Hi guys, I'm running an Apache2 web server with a dynamic IP address. I bought exampledomain.net, and I use no-ip.com's domain-update service to redirect any visitors to my current ip address (endnote #1). For example, someone visits exampledomain.net and they get redirected to 73.181.57.34. It works like a charm. However, it isn't all that user-friendly. Can I rewrite the redirected, ip-address URL? I tried these rewrite rules in the root folder's .htaccess... RewriteEngine On RewriteCond %{HTTP_HOST} ^73\.181\.57\.34:88 RewriteRule ^(.*)$ http://www.exampledomain.net/$1 [L,NC] # I simplified the RewriteCond. I would use regex in a real situation. Of course, this creates an infinite loop. The user visits www.exampledomain.net. They're redirected to 73.181.57.34:88 by no-ip. Apache redirects them to www.exampledomain.net which redirects them back to 73.181.57.34:88... so on and so forth. I'm a noob when it comes to rewriting, but is there a way to rewrite a URL without redirecting? I tried these rewrite rules too (a shot in the dark)... RewriteEngine On RewriteCond %{HTTP_HOST} ^73\.181\.57\.34:88 RewriteRule ^(.*)$ my.exampledomain.net/$1 [L,NC] # I'd read that Apache replied with a redirect header when you include http Thanks! (1) No-IP works like this: You download and install their dynamic update client on your server. Every couple of minutes it polls your server for its current external ip address. If it's changed, it updates your server's ip address in no-ip's records.

    Read the article

  • How do I ensure that a regex does not match an empty string?

    - by Dancrumb
    I'm using the Jison parser generator for Javascript and am having problems with my language specification. The program I'm writing will be a calculator that can handle feet, inches and sixteenths. In order to do this, I have the following specification: %% ([0-9]+\s*"'")?\s*([0-9]+\s*"\"")?\s*([0-9]+\s*"s")? {return 'FIS';} [0-9]+("."[0-9]+)?\b {return 'NUMBER';} \s+ {/* skip whitespace */} "*" {return '*';} "/" {return '/';} "-" {return '-';} "+" {return '+';} "(" {return '(';} ")" {return ')';} <<EOF>> {return 'EOF';} Most of these lines come from a basic calculator specification. I simply added the first line. The regex correctly matches feet, inch, sixteenths, such as 6'4" (six feet, 4 inches) or 4"5s (4 inches, 5 sixteenths) with any kind of whitespace between the numbers and indicators. The problem is that the regex also matches a null string. As a result, the lexical analysis always records a FIS at the start of the line and then the parsing fails. Here is my question: is there a way to modify this regex to guarantee that it will only match a non-zero length string?
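    One common way to rule out the empty match is to require whichever component comes first and keep only the later ones optional, using alternation. A quick illustration in plain JavaScript (not Jison syntax) of that reshaped pattern:

```javascript
// Require at least one of feet / inches / sixteenths by anchoring each alternative
// on the first component it contains; the empty string can no longer match.
var fis = /^(?:\d+\s*'\s*(?:\d+\s*"\s*)?(?:\d+\s*s)?|\d+\s*"\s*(?:\d+\s*s)?|\d+\s*s)$/;

console.log(fis.test("6'4\""));  // true  - feet and inches
console.log(fis.test('4"5s'));   // true  - inches and sixteenths
console.log(fis.test(''));       // false - the empty string is rejected
```

    The same three-branch alternation can be carried back into the Jison lexer rule in place of the three independently optional groups.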

    Read the article

  • How can I optimize retrieving lowest edit distance from a large table in SQL?

    - by Matt
    Hey, I'm having trouble optimizing this Levenshtein distance calculation I'm doing. I need to do the following: get the record with the minimum distance for the source string as well as for a trimmed version of the source string; pick the record with the minimum distance; if the min distances are equal (original vs. trimmed), choose the trimmed one with the lowest distance; if there are still multiple records that fall under the above two categories, pick the one with the highest frequency. Here's my working version: DECLARE @Results TABLE ( ID int, [Name] nvarchar(200), Distance int, Frequency int, Trimmed bit ) INSERT INTO @Results SELECT ID, [Name], (dbo.Levenshtein(@Source, [Name])) As Distance, Frequency, 'False' As Trimmed FROM MyTable INSERT INTO @Results SELECT ID, [Name], (dbo.Levenshtein(@SourceTrimmed, [Name])) As Distance, Frequency, 'True' As Trimmed FROM MyTable SET @ResultID = (SELECT TOP 1 ID FROM @Results ORDER BY Distance, Trimmed, Frequency) SET @Result = (SELECT TOP 1 [Name] FROM @Results ORDER BY Distance, Trimmed, Frequency) SET @ResultDist = (SELECT TOP 1 Distance FROM @Results ORDER BY Distance, Trimmed, Frequency) SET @ResultTrimmed = (SELECT TOP 1 Trimmed FROM @Results ORDER BY Distance, Trimmed, Frequency) I believe what I need to do here is to: not dump the results to a temporary table; do only 1 select from `MyTable`; and set the results right in that initial select statement (since a select can assign variables, and you can set multiple variables in one select statement). I know there has to be a good implementation for this but I can't figure it out... this is as far as I got: SELECT top 1 @ResultID = ID, @Result = [Name], (dbo.Levenshtein(@Source, [Name])) As distOrig, (dbo.Levenshtein(@SourceTrimmed, [Name])) As distTrimmed, Frequency FROM MyTable WHERE /* ... yeah I'm lost */ ORDER BY distOrig, distTrimmed, Frequency Any ideas?
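    A sketch of the single-statement idea, keeping the same ordering as the original @Results query: a derived table plays the role of the table variable, and all four variables are assigned in one SELECT TOP 1.

```sql
-- Sketch: same logic as the temp-table version, but in one statement over MyTable.
SELECT TOP 1
    @ResultID      = ID,
    @Result        = [Name],
    @ResultDist    = Distance,
    @ResultTrimmed = Trimmed
FROM
(
    SELECT ID, [Name], Frequency,
           dbo.Levenshtein(@Source, [Name]) AS Distance, CAST(0 AS bit) AS Trimmed
    FROM MyTable
    UNION ALL
    SELECT ID, [Name], Frequency,
           dbo.Levenshtein(@SourceTrimmed, [Name]) AS Distance, CAST(1 AS bit) AS Trimmed
    FROM MyTable
) AS d
ORDER BY Distance, Trimmed, Frequency;
```

    This removes the table variable and the four separate TOP 1 queries; each distance is still computed once per row.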

    Read the article

  • How To Share Information Between Django and Javascript?

    - by Randy
    So I am pretty new to both Django and Javascript (I am using JQuery) and I am wondering if I am doing a hack or if there are slicker ways to send client-side displayed database ids to the django server-side. Here is my process: I have a dataTable (http://datatables.net) in which I display rows of data by using the bProcessing option to use AJAX to retrieve records from the database. The URL in my urls.py is something like: url(r'^assets/activitylog/(?P<cid>.*)$', views.getActivityTable_ajax, name="activitylog_table"), and my dataTable ajax-relevant code is something like: "sAjaxSource": "/assets/activitylog/" + getIDFromHTML(), where the javascript function getIDFromHTML() grabs the <cid> that is used by the Django view, and is simply: function getIDFromHTML(){ // Simply return the text in the #release_id div element from the HTML return $("#release_id").html(); }; This is the part that seems "hacky" to me. I am inserting into my template code the database id that I am using in the datatables URL (with display:none in the css) just so I can pass it back to the view. Most of this is necessitated because one cannot use django template tags in the javascript code unless the code is embedded into the HTML itself, which I am not (and will not) do. The only other thing that I have found is to change the URL to get rid of the parameter passed in to: url(r'^assets/activitylog', views.getActivityTable_ajax, name="activitylog_table"), and change the view code to: def getActivityTable_ajax(request): """Returns the activity for a given pid from HTTP GET ajax reqest""" pid = int(urlparse.urlparse(request.META['HTTP_REFERER']).path.split('/')[-1]) # rest of view code here... since the id that I need is on the end of this referer url. This way I don't have to monkey around with embedding the hidden database id into the HTML and passing it back via ajax to the table-population view code. Is it okay to use HTTP_REFERER in the request object in this manner? Am I going about this in the totally wrong way? Thanks in advance!
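    A sketch of a middle ground that avoids both the hidden div and HTTP_REFERER: render the id into a data attribute on the table element itself and read it back with jQuery (the element id and attribute name below are hypothetical; the urls.py entry with the <cid> parameter stays as it is).

```javascript
// Template:  <table id="activity_table" data-release-id="{{ release.id }}"> ... </table>
var releaseId = $("#activity_table").attr("data-release-id");

$("#activity_table").dataTable({
    "bProcessing": true,
    "bServerSide": true,
    "sAjaxSource": "/assets/activitylog/" + releaseId
});
```

    The id is still emitted by the template, but it lives on the element it belongs to rather than in an invisible div, and the view keeps receiving it explicitly through the URL.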

    Read the article

  • AjaxFileUpload return download panel when data is json

    - by Tr.Crab
    I use AjaxFileUpload (http://www.phpletter.com/Our-Projects/AjaxFileUpload/ ) to upload a file and get a json result-type response in struts2 ( code.google.struts2jsonresult.JSONResult ), but the browser always pops up the download panel. Please give me some suggestions; thanks in advance. Here is my config in struts.xml : ...... <result-type name="json" class="code.google.struts2jsonresult.JSONResult"> ............ <action name="doGetList" method="doGetList" class="main.java.GetListAction"> <result type="json"> <param name="target">jsonObject</param> <param name="deepSerialize">true</param> <param name="patterns"> -*.class</param> </result> </action> and the js client: function ajaxFileUpload(){ $("#loading").ajaxStart(function(){ $(this).show(); }).ajaxComplete(function(){ $(this).hide(); }); $.ajaxFileUpload ( { url:'doGetList.do', secureuri:false, fileElementId:'uploadfile', dataType: 'json', success: function (data, status) { if(typeof(data.error) != 'undefined') { if(data.error != '') { alert(data.error); } else { alert(data.msg); } } }, error: function (data, status, e) { alert(e); alert(data.records); } } ) return false; }

    Read the article

  • Dynamically generate client-side HTML form control using JavaScript and server-side Python code in Google App Engine

    - by gisc
    I have the following client-side front-end HTML using the Jinja2 template engine: {% for record in result %} <textarea name="remark">{{ record.remark }}</textarea> <input type="submit" name="approve" value="Approve" /> {% endfor %} Thus the HTML may show more than one set of textarea and submit button. The back-end Python code retrieves a variable number of records from a gql query using the model and passes this to the Jinja2 template in result. When a submit button is clicked, it triggers the post method to update the record: def post(self): if self.request.get('approve'): updated_remark = self.request.get('remark') record.remark = db.Text(updated_remark) record.put() However, in some instances, the record updated is NOT the one that corresponds to the submit button clicked (eg if a user clicks on record 1's submit, record 2's remark gets updated, but not record 1's). I gather that this is due to the duplicate attribute name remark. I could possibly use JavaScript/jQuery to generate different attribute names. The question is, how do I code the back-end Python to get the (variable number of) names generated by the JavaScript? Thanks.
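    One way to avoid JavaScript-generated names altogether is to embed the record's key in the field names in the template, so each form is self-identifying; the handler then just looks for whichever approve_* button was submitted. A rough sketch, assuming App Engine's webapp/db (handler and key handling are hypothetical, and each record is wrapped in its own form for simplicity):

```python
# Template sketch (Jinja2):
#   {% for record in result %}
#     <form method="post">
#       <textarea name="remark_{{ record.key() }}">{{ record.remark }}</textarea>
#       <input type="submit" name="approve_{{ record.key() }}" value="Approve" />
#     </form>
#   {% endfor %}

from google.appengine.ext import webapp, db

class ApproveHandler(webapp.RequestHandler):
    def post(self):
        # Find which approve_* button was pressed and update only that record.
        for arg in self.request.arguments():
            if arg.startswith('approve_'):
                key = arg[len('approve_'):]
                record = db.get(db.Key(key))
                record.remark = db.Text(self.request.get('remark_' + key))
                record.put()
                break
```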

    Read the article

  • MySql Query lag time?

    - by Click Upvote
    When there are multiple PHP scripts running in parallel, each making an UPDATE query to the same record in the same table repeatedly, is it possible for there to be a 'lag time' before the table is updated with each query? I have basically 5-6 instances of a PHP script running in parallel, having been launched via cron. Each script gets all the records in the items table, and then loops through them and processes them. However, to avoid processing the same item more than once, I store the id of the last item being processed in a separate table. So this is how my code works: function getCurrentItem() { $sql = "SELECT currentItemId from settings"; $result = $this->db->query($sql); return $result->get('currentItemId'); } function setCurrentItem($id) { $sql = "UPDATE settings SET currentItemId='$id'"; $this->db->query($sql); } $currentItem = $this->getCurrentItem(); $sql = "SELECT * FROM items WHERE status='pending' AND id > $currentItem'"; $result = $this->db->query($sql); $items = $result->getAll(); foreach ($items as $i) { //Check if $i has been processed by a different instance of the script, and if so, //leave it untouched. if ($this->getCurrentItem() > $i->id) continue; $this->setCurrentItem($i->id); // Process the item here } But despite all the precautions, most items are being processed more than once. This makes me think that there is some lag time between the update queries being run by the PHP script and when the database actually updates the record. Is that true? And if so, what other mechanism should I use to ensure that the PHP scripts always get only the latest currentItemId even when there are multiple scripts running in parallel? Would using a text file instead of the db help?
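    The usual fix for this kind of race is not a faster read of currentItemId but an atomic per-item claim: a conditional UPDATE re-checks the status inside a single statement, so only one script can ever claim a given item. A sketch in the same style as the code above (the affectedRows() call stands in for whatever your db wrapper exposes around mysql_affected_rows()):

```php
foreach ($items as $i) {
    // Only one script's UPDATE will match the row while it is still 'pending'.
    $sql = "UPDATE items SET status = 'processing'
            WHERE id = {$i->id} AND status = 'pending'";
    $this->db->query($sql);

    if ($this->db->affectedRows() == 0) {
        continue;   // another instance already claimed this item
    }

    // Process the item here, then mark it done
    $this->db->query("UPDATE items SET status = 'done' WHERE id = {$i->id}");
}
```

    SELECT ... FOR UPDATE inside a transaction is the other common option; a text file would have exactly the same race unless you add file locking.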

    Read the article

  • Managing multiple customer databases in ASP.NET MVC application

    - by Robert Harvey
    I am building an application that requires separate SQL Server databases for each customer. To achieve this, I need to be able to create a new customer folder, put a copy of a prototype database in the folder, change the name of the database, and attach it as a new "database instance" to SQL Server. The prototype database contains all of the required table, field and index definitions, but no data records. I will be using SMO to manage attaching, detaching and renaming the databases. In the process of creating the prototype database, I tried attaching a copy of the database (companion .MDF, .LDF pair) to SQL Server, using Sql Server Management Studio, and discovered that SSMS expects the database to reside in c:\program files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\MyDatabaseName.MDF Is this a "feature" of SQL Server? Is there a way to manage individual databases in separate directories? Or am I going to have to put all of the customer databases in the same directory? (I was hoping for a little better control than this). NOTE: I am currently using SQL Server Express, but for testing purposes only. The production database will be SQL Server 2008, Enterprise version. So "User Instances" are not an option.
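    For what it's worth, SQL Server itself does not require databases to live in the default DATA directory; attaching with explicit file paths works, either through SMO's Server.AttachDatabase with a file list or in plain T-SQL. A sketch with made-up folder and database names:

```sql
-- Attach an MDF/LDF pair from a per-customer folder by giving explicit paths.
CREATE DATABASE CustomerA_Db
ON (FILENAME = 'C:\Customers\CustomerA\CustomerA_Db.mdf'),
   (FILENAME = 'C:\Customers\CustomerA\CustomerA_Db_log.ldf')
FOR ATTACH;
```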

    Read the article

  • database structure

    - by jindalsyogesh
    I have a table named ActivityRecording. This table currently has 500,000 records. I need to add a lot of new inputs that relate to the ActivityRecording table. The relation of ActivityRecording to these new input fields is 1 to 0..1. So, what's going to happen on screen is that when the user fills in the ActivityRecording data, he will then be taken to a new page, and this page will show a form based on the user's input (from a dropdown named service) in ActivityRecording. There will be 6 different kinds of forms (each form will have 7-8 inputs, including textareas of about 5 KB, textboxes and checkboxes). So, for one ActivityRecording the user will fill in one of the 6 forms. There are two ways I know of (there could be more) to design the data structure: Add all the inputs from all 6 forms to the ActivityRecording table, so columns belonging to 5 of these forms will be null in this table and only the columns belonging to one of the forms will have values. The other way would be to add 6 new tables (one for each form) and add 6 foreign key columns to the ActivityRecording table, so out of the 6 foreign keys, 5 will be null and one will actually point to a table. Which approach is a better data structure design? Please take into consideration that the number of rows in this table is 500,000 and is expected to grow at a faster rate now.
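    For concreteness, here is a rough sketch of what option 2 would look like for one of the six form types; the table and column names are made up, the DDL is MySQL-flavoured purely for illustration, and the question of which option is better is left open:

```sql
CREATE TABLE FormTypeA (
    form_a_id INT AUTO_INCREMENT PRIMARY KEY,
    remarks   TEXT,            -- one of the ~5 KB textareas
    approved  TINYINT(1)       -- ... plus the rest of the 7-8 inputs for this form
) ENGINE=InnoDB;

ALTER TABLE ActivityRecording
    ADD COLUMN form_a_id INT NULL,
    ADD CONSTRAINT fk_activity_form_a
        FOREIGN KEY (form_a_id) REFERENCES FormTypeA (form_a_id);
-- repeated for FormTypeB ... FormTypeF; five of the six FK columns stay NULL per row
```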

    Read the article

  • Get the equivalent time between "dynamic" time zones

    - by doctore
    I have a table providers that has three columns (containing more columns but not important in this case): starttime, start time in which you can contact him. endtime, final hour in which you can contact him. region_id, region where the provider resides. In USA: California, Texas, etc. In UK: England, Scotland, etc starttime and endtime are time without timezone columns, but, "indirectly", their value has time zone of the region in which the provider resides. For example: starttime | endtime | region_id (time zone of region) | "real" st | "real" et ----------|----------|---------------------------------|-----------|----------- 03:00:00 | 17:00:00 | 1 (EGT => -1) | 02:00:00 | 16:00:00 Often I need to get the list of suppliers whose time range is within the current server time (taking into account the time zone conversion). The problem is that the time zones aren't "constant", ie, they may change during the summer time. However, this change is very specific to the region and not always carried out at the same time: EGT <= EGST, ART <= ARST, etc. The question is: 1. Is it necessary to use a webservice to update every so often the time zones in the regions? Does anyone know of a web service that can serve? 2. Is there a better approach to solve this problem? Thanks in advance. UPDATE I will give an example to clarify what I'm trying to get. In the table providers I found this records: idproviders | starttime | endtime | region_id ------------|-----------|----------|----------- 1 | 03:00:00 | 17:00:00 | 23 (Texas) 2 | 04:00:00 | 18:00:00 | 23 (Texas) If I execute the query in January, with this information: Server time (UTC offset) = 0 hours Texas providers (UTC offset) = +1 hour Server time = 02:00:00 I should get the following results: idproviders = 1 If I execute the query in June, with this information: Server time (UTC offset) = 0 hours Texas providers (UTC offset) = +2 hours (their local time has not changed, but their time zone has changed) Server time = 02:00:00 I should get the following results: idproviders = 1 and 2
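    One way to sidestep maintaining offsets by hand is to store an IANA time zone name per region (e.g. 'America/Chicago' for Texas) and let the database's own tz data apply DST. A sketch, assuming PostgreSQL (the "time without time zone" wording suggests it) and a hypothetical regions table with an added tz_name column:

```sql
-- The database's tz data applies DST, so no webservice is needed to track offsets.
SELECT p.idproviders
FROM   providers p
JOIN   regions  r ON r.region_id = p.region_id
WHERE  CAST(now() AT TIME ZONE r.tz_name AS time)
       BETWEEN p.starttime AND p.endtime;
```

    With this approach the same query returns providers 1 in January and 1 and 2 in June, because now() AT TIME ZONE 'America/Chicago' already reflects the summer-time shift.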

    Read the article

  • How to send check boxes ID in gridview in one string for mass update.

    - by SmartDev
    Hi, I have a grid which has check boxes; selecting the top check box (select all) selects all the check boxes in the gridview, and the update is then applied to each of them. For this I'm using a for loop that executes the update for every row, and this is taking a lot of time because there are more than 100 records in the grid. try { string StrOutputMessageDisplayDocReqCsu = string.Empty; string strid = string.Empty; string strflag = string.Empty; string strSelected = string.Empty; for (int j = 0; j < GdvDocReqMU.Rows.Count; j++) { CheckBox Chkupdate = (CheckBox)GdvDocReqMU.Rows[j].Cells[1].FindControl("chkDR"); if (Chkupdate != null) { if (Chkupdate.Checked) { strid = ((Label)GdvDocReqMU.Rows[j].FindControl("lblIDDocReqCsu")).Text; strflag = ((Label)GdvDocReqMU.Rows[j].FindControl("lblStatusDocReqCsu")).Text; cmd = new SqlCommand("sp_Update_v1", con); cmd.CommandType = CommandType.StoredProcedure; cmd.Parameters.AddWithValue("@id", strSelected); cmd.Parameters.AddWithValue("@flag", DdlStatusDocReqMU.SelectedValue); cmd.Parameters.AddWithValue("@notes", txtnotesDocReqMU.Text); cmd.Parameters.AddWithValue("@user", strUseridDRAhk); cmd.Parameters.Add(new SqlParameter("@message", SqlDbType.VarChar, 100, ParameterDirection.Output, false, 0, 50, "message", DataRowVersion.Default, null)); cmd.UpdatedRowSource = UpdateRowSource.OutputParameters; con.Open(); cmd.ExecuteNonQuery(); con.Close(); } } } //} //StrOutputMessageDisplayDocReqCsu += (string)cmd.Parameters["@message"].Value + "<br/>"; //lbldbmessDocReqAhk.Text += StrOutputMessageDisplayDocReqCsu + "<br>"; GetgridDocReq(); } catch (Exception ex) { lbldbmessDocReqAhk.Text = ex.Message.ToString(); }
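    One possible shape for the "one string" approach the title asks about: collect the checked IDs in the loop, then make a single call afterwards. The sketch below assumes a new (hypothetical) procedure that accepts a comma-separated @idList and splits it server-side (or takes a table-valued parameter); the other parameters mirror the original code.

```csharp
var ids = new List<string>();   // requires System.Collections.Generic
for (int j = 0; j < GdvDocReqMU.Rows.Count; j++)
{
    CheckBox chk = (CheckBox)GdvDocReqMU.Rows[j].Cells[1].FindControl("chkDR");
    if (chk != null && chk.Checked)
        ids.Add(((Label)GdvDocReqMU.Rows[j].FindControl("lblIDDocReqCsu")).Text);
}

if (ids.Count > 0)
{
    using (var cmd = new SqlCommand("sp_MassUpdate_v1", con) { CommandType = CommandType.StoredProcedure })
    {
        cmd.Parameters.AddWithValue("@idList", string.Join(",", ids.ToArray()));
        cmd.Parameters.AddWithValue("@flag", DdlStatusDocReqMU.SelectedValue);
        cmd.Parameters.AddWithValue("@notes", txtnotesDocReqMU.Text);
        cmd.Parameters.AddWithValue("@user", strUseridDRAhk);
        con.Open();
        cmd.ExecuteNonQuery();
        con.Close();
    }
}
```

    The connection is opened and closed once instead of once per checked row, which is where most of the time was going.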

    Read the article

  • how to use a parameterized function for the Default Binding of a Sql Server column

    - by Walt Gaber
    I have a table that catalogs selected files from multiple sources. I want to record whether a file is a duplicate of a previously cataloged file at the time the new file is cataloged. I have a column in my table (“primary_duplicate”) to record each entry as ‘P’ (primary) or ‘D’ (duplicate). I would like to provide a Default Binding for this column that would check for other occurrences of this file (i.e. name, length, timestamp) at the time the new file is being recorded. I have created a function that performs this check (see “GetPrimaryDuplicate” below). But I don’t know how to bind this function which requires three parameters to the table’s “primary_duplicate” column as its Default Binding. I would like to avoid using a trigger. I currently have a stored procedure used to insert new records that performs this check. But I would like to ensure that the flag is set correctly if an insert is performed outside of this stored procedure. How can I call this function with values from the row that is being inserted? USE [MyDatabase] GO SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE TABLE [dbo].[FileCatalog]( [id] [uniqueidentifier] NOT NULL, [catalog_timestamp] [datetime] NOT NULL, [primary_duplicate] nchar NOT NULL, [name] nvarchar NULL, [length] [bigint] NULL, [timestamp] [datetime] NULL ) ON [PRIMARY] GO ALTER TABLE [dbo].[FileCatalog] ADD CONSTRAINT [DF_FileCatalog_id] DEFAULT (newid()) FOR [id] GO ALTER TABLE [dbo].[FileCatalog] ADD CONSTRAINT [DF_FileCatalog_catalog_timestamp] DEFAULT (getdate()) FOR [catalog_timestamp] GO ALTER TABLE [dbo].[FileCatalog] ADD CONSTRAINT [DF_FileCatalog_primary_duplicate] DEFAULT (N'GetPrimaryDuplicate(name, length, timestamp)') FOR [primary_duplicate] GO USE [MyDatabase] GO SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE FUNCTION [dbo].[GetPrimaryDuplicate] ( @name nvarchar(255), @length bigint, @timestamp datetime ) RETURNS nchar(1) AS BEGIN DECLARE @c int SELECT @c = COUNT(*) FROM FileCatalog WHERE name=@name and length=@length and timestamp=@timestamp and primary_duplicate = 'P' IF @c > 0 RETURN 'D' -- Duplicate RETURN 'P' -- Primary END GO

    Read the article

  • How does C#'s DateTime.Now affect query plan caching in SQL Server?

    - by Bill Paetzke
    Given: Let's say we have a stored procedure. It reports data back to a user on a webpage. The user can set a date range. If the user sets today's date as the "end date," which includes today's data, the web app passes DateTime.Now to the sql proc. Let's say that one user runs a report--5/1/2010 to now--over and over several times. On the webpage, the user sees "5/1/2010" to "5/4/2010." But the web app passes DateTime.Now to the sql proc as the end date. So, the end date in the proc will always be different, although the user is querying a similar date range. Assume the number of records in the table and number of users are large. So any performance gains matter. Hence the importance of the question. Question: Does passing DateTime.Now as a parameter to a proc prevent SQL Server from caching the query plan? If so, then is the web app missing out on huge performance gains? Possible Solution: I thought DateTime.Today.AddDays(1) would be a possible solution. It would allow the user to get the latest data and always pass the same end date to the sql proc--"5/5/2010" in this case. Please speak to this as well. Sample proc and execution (if that helps to understand): CREATE PROCEDURE GetFooData @StartDate datetime @EndDate datetime AS SELECT * FROM Foo WHERE LogDate >= @StartDate AND LogDate < @EndDate Here's a sample execution using DateTime.Now: EXEC GetFooData '2010-05-01', '2010-05-04 15:41:27' -- passed in DateTime.Now Here's a sample execution using DateTime.Today.AddDays(1) EXEC GetFooData '2010-05-01', '2010-05-05' -- passed in DateTime.Today.AddDays(1) The same data is returned for both procs, since the current time is: 2010-05-04 15:41:27.

    Read the article

  • SQL: select rows with the same order as IN clause

    - by Andrea3000
    I know that this question has been asked several times and I've read all the answer but none of them seem to completely solve my problem. I'm switching from a mySQL database to a MS Access database with MS SQL. In both of the case I use a php script to connect to the database and perform SQL queries. I need to find a suitable replacement for a query I used to perform on mySQL. I want to: perform a first query and order records alphabetically based on one of the columns construct a list of IDs which reflects the previous alphabetical order perform a second query with the IN clause applied with the IDs' list and ordered by this list. In mySQL I used to perform the last query this way: SELECT name FROM users WHERE id IN ($name_ids) ORDER BY FIND_IN_SET(id,'$name_ids') Since FIND_IN_SET is available only in mySQL and CHARINDEX and PATINDEX are not available from my php script, how can I achieve this? I know that I could write something like: SELECT name FROM users WHERE id IN ($name_ids) ORDER BY CASE id WHEN ... THEN 1 WHEN ... THEN 2 WHEN ... THEN 3 WHEN ... THEN 4 END but you have to consider that: IDs' list has variable length and elements because it depends on the first query that list can easily contains thousands of elements Have you got any hint on this? Is there a way to programmatically construct the ORDER BY CASE ... WHEN ... statement? Is there a better approach since my list of IDs can be big?
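    Since the ID list is variable-length, the CASE expression can simply be built in PHP from the same array used for the IN clause, along the lines of the asker's own example. A sketch, assuming $name_ids is an array of integer IDs that has already been validated/escaped:

```php
$in   = implode(',', $name_ids);
$case = 'CASE id ';
foreach ($name_ids as $pos => $id) {
    $case .= 'WHEN ' . $id . ' THEN ' . $pos . ' ';
}
$case .= 'END';

$sql = "SELECT name FROM users WHERE id IN ($in) ORDER BY $case";
```

    With thousands of IDs the generated statement gets large; at that point it may be simpler to fetch the rows unordered and re-sort them in PHP against the original array.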

    Read the article

  • stackoverflow tags and related tags

    - by parminder
    Hi experts, I am working on a website where a user can add tags to their posted books. It is similar to Stack Overflow, but I am keeping my tags in a different table. So here are the tables/classes in LINQ to Entities:
    Books { bookId, Title }
    Tags { Id, Tag }
    BooksTags { Id, BookId, TagId }
    Here are a few sample records.
    Books:
      BookId  Title
      113421  A
      113422  B
    Tags:
      Id  Tag
      1   ASP
      2   C#
      3   CSS
      4   VB
      5   VB.NET
      6   PHP
      7   java
      8   pascal
    BooksTags:
      Id  BookId  TagId
      1   113421  1
      2   113421  2
      3   113421  3
      4   113421  4
      5   113422  1
      6   113422  4
      7   113422  8
    Question 1: I need to write something in LINQ to Entities that gives me data according to the tags. Say I want bookIds where tagid = 1: it should return bookIds 113421 and 113422, as the tag exists on both books. But if I ask for data for tags 1 and 2, it should return only book 113421, as that is the only book where both tags are present.
    Question 2: I need the tags and their counts too, to show as related tags. So in the first case my related tags class should have the following result:
    RelatedTags
      Tag  Count
      2    1
      3    1
      4    2
      8    1
    and in the second case, when two tags are requested, the result should be:
    RelatedTags
      Tag  Count
      3    1
      4    1
    I have gotten the first part working by converting a SQL query in Linqer, but that seems like a mess, so I want to know if there is a better approach. I have used a dynamic where clause to include the two tags. If someone can help, it will be much appreciated. Thanks, Parminder
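    A sketch of both queries in LINQ to Entities (assuming EF 4-level support for Contains and nested queries; entity and property names follow the classes above):

```csharp
var requested = new List<int> { 1, 2 };

// Question 1: books that carry every requested tag.
var bookIds = ctx.BooksTags
    .Where(bt => requested.Contains(bt.TagId))
    .GroupBy(bt => bt.BookId)
    .Where(g => g.Count() == requested.Count)   // must match all requested tags
    .Select(g => g.Key);

// Question 2: counts of the other tags on those books ("related tags").
var relatedTags = ctx.BooksTags
    .Where(bt => bookIds.Contains(bt.BookId) && !requested.Contains(bt.TagId))
    .GroupBy(bt => bt.TagId)
    .Select(g => new { TagId = g.Key, Count = g.Count() })
    .ToList();
```

    The grouping trick (count of matched tags equals the number of requested tags) is what expresses "has all of these tags" without a dynamic where clause.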

    Read the article

  • Help ! How do I get the total number rows from my SQL Server paging procedure ?

    - by The_AlienCoder
    Ok, I have a table in my SQL Server database that stores comments. My desire is to be able to page through the records using [Back], [Next], page numbers and [Last] buttons in my data list. I figured the most efficient way was to use a stored procedure that only returns a certain number of rows within a particular range. Here is what I came up with: @PageIndex INT, @PageSize INT, @postid int AS SET NOCOUNT ON begin WITH tmp AS ( SELECT comments.*, ROW_NUMBER() OVER (ORDER BY dateposted ASC) AS Row FROM comments WHERE (comments.postid = @postid)) SELECT tmp.* FROM tmp WHERE Row between (@PageIndex - 1) * @PageSize + 1 and @PageIndex*@PageSize end RETURN Now everything works fine and I have been able to implement the [Next] and [Back] buttons in my data list pager. Now I need the total number of all comments (not just those on the current page) so that I can implement my page numbers and the [Last] button on my pager. In other words, I want to return the total number of rows in my first select statement, i.e. WITH tmp AS ( SELECT comments.*, ROW_NUMBER() OVER (ORDER BY dateposted ASC) AS Row FROM comments WHERE (comments.postid = @postid)) set @TotalRows = @@rowcount @@rowcount doesn't work and raises an error. I also can't get COUNT(*) to work either. Is there another way to get the total number of rows, or is my approach doomed?
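    One option is a windowed count in the same CTE, so every returned page row also carries the total for the post (SQL Server 2005 and later). A sketch based on the procedure above:

```sql
WITH tmp AS
(
    SELECT comments.*,
           ROW_NUMBER() OVER (ORDER BY dateposted ASC) AS Row,
           COUNT(*)     OVER ()                        AS TotalRows
    FROM comments
    WHERE comments.postid = @postid
)
SELECT tmp.*
FROM tmp
WHERE Row BETWEEN (@PageIndex - 1) * @PageSize + 1 AND @PageIndex * @PageSize;
```

    The caller then reads TotalRows from any row of the page to compute the page count and the [Last] index.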

    Read the article

  • Can I clone an IQueryable in linq? For UNION purposes?

    - by user169867
    I have a table of WorkOrders. The table has a PrimaryWorker & PrimaryPay field. It also has a SecondaryWorker & SecondaryPay field (which can be null). I wish to run 2 very similar queries & union them so that it will return a Worker Field & Pay field. So if a single WorkOrder record had both the PrimaryWorker and SecondaryWorker field populated I would get 2 records back. The "where clause" part of these 2 queries is very similar and long to construct. Here's a dummy example var q = ctx.WorkOrder.Where(w => w.WorkDate >= StartDt && w.WorkDate <= EndDt); if(showApprovedOnly) { q = q.Where(w => w.IsApproved); } //...more filters applied Now I also have a search flag called "hideZeroPay". If that's true I don't want to include the record if the worker was payed $0. But obviously for 1 query I need to compare the PrimaryPay field and in the other I need to compare the SecondaryPay field. So I'm wondering how to do this. Can I clone my base query "q" and make a primary & secondary worker query out of it and then union those 2 queries together? I'd greatly appreciate an example of how to correctly handle this. Thanks very much for any help.
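    Since IQueryable composition never mutates the source query, the shared base query can simply branch into two projections onto a common shape and then be combined with Concat (UNION ALL) or Union. A sketch under LINQ to SQL / EF 4, using the property names mentioned above (the casts assume the pay columns are decimal; adjust to the real types so both branches produce the same anonymous type):

```csharp
var primary = q
    .Select(w => new { Worker = w.PrimaryWorker, Pay = (decimal?)w.PrimaryPay });

var secondary = q
    .Where(w => w.SecondaryWorker != null)
    .Select(w => new { Worker = w.SecondaryWorker, Pay = w.SecondaryPay });

if (hideZeroPay)
{
    // The same filter applies to whichever pay column each branch projected.
    primary   = primary.Where(x => x.Pay > 0);
    secondary = secondary.Where(x => x.Pay > 0);
}

var workersAndPay = primary.Concat(secondary).ToList();   // Union() would also de-duplicate
```

    No explicit "clone" is needed: q stays untouched and both branches reuse the long shared where clause built earlier.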

    Read the article

  • Optimize a MySQL count each duplicate Query

    - by Onema
    I have the following query that gets the city name, city id, the region name, and a count of duplicate names for that record: SELECT Country_CA.City AS currentCity, Country_CA.CityID, globe_region.region_name, ( SELECT count(Country_CA.City) FROM Country_CA WHERE City LIKE currentCity ) as counter FROM Country_CA LEFT JOIN globe_region ON globe_region.region_id = Country_CA.RegionID AND globe_region.country_code = Country_CA.CountryCode ORDER BY City This example is for Canada, and the cities will be displayed in a dropdown list. There are a few towns in Canada, and in other countries, that have the same names. Therefore I want to know if there is more than one town with the same name, so that the region name can be appended to the town name. Region names are found in the globe_region table. Country_CA and globe_region look similar to this (I have changed a few things for visualization purposes): CREATE TABLE IF NOT EXISTS `Country_CA` ( `City` varchar(75) NOT NULL DEFAULT '', `RegionID` varchar(10) NOT NULL DEFAULT '', `CountryCode` varchar(10) NOT NULL DEFAULT '', `CityID` int(11) NOT NULL DEFAULT '0', PRIMARY KEY (`City`,`RegionID`), KEY `CityID` (`CityID`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; AND CREATE TABLE IF NOT EXISTS `globe_region` ( `country_code` char(2) COLLATE utf8_unicode_ci NOT NULL, `region_code` char(2) COLLATE utf8_unicode_ci NOT NULL, `region_name` varchar(50) COLLATE utf8_unicode_ci NOT NULL, PRIMARY KEY (`country_code`,`region_code`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci; The query at the top does exactly what I want it to do, but it takes way too long to generate a list for 5000 records. I would like to know if there is a way to optimize the sub-query in order to obtain the same results faster. The results should look like this: City CityID region_name counter sheraton 2349269 British Columbia 1 sherbrooke 2349270 Quebec 2 sherbrooke 2349271 Nova Scotia 2 shere 2349273 British Columbia 1 sherridon 2349274 Manitoba 1
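    A common rewrite is to replace the correlated subquery (one count per row) with a join against a derived table that counts each city name once; a sketch keeping the join conditions from the query above:

```sql
SELECT c.City AS currentCity,
       c.CityID,
       g.region_name,
       dup.counter
FROM Country_CA AS c
JOIN (
    SELECT City, COUNT(*) AS counter
    FROM Country_CA
    GROUP BY City
) AS dup ON dup.City = c.City
LEFT JOIN globe_region AS g
       ON g.region_id = c.RegionID
      AND g.country_code = c.CountryCode
ORDER BY c.City;
```

    GROUP BY City here plays the role of the LIKE-based count (the original has no wildcards), so each name is counted once for the whole table instead of once per output row.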

    Read the article

  • Avoid MySQL multi-results from SP with Execute

    - by hhyhbpen
    Hi, i have an SP like BEGIN DECLARE ... CREATE TEMPORARY TABLE tmptbl_found (...); PREPARE find FROM" INSERT INTO tmptbl_found (SELECT userid FROM ( SELECT userid FROM Soul WHERE .?.?. ORDER BY .?.?. ) AS left_tbl LEFT JOIN Contact ON userid = Contact.userid WHERE Contact.userid IS NULL LIMIT ?) "; DECLARE iter CURSOR FOR SELECT userid, ... FROM Soul ...; ... l:LOOP FETCH iter INTO u_id, ...; ... EXECUTE find USING ...,. . .,u_id,...; ... END LOOP; ... END// and it gives multi-results. Besides it's inconvenient, if i get all this multi-results (which i really don't need at all), about 5 (limit's param) for each of the hundreds of thousands of records in Soul, i'm afraid it will take all my memory (and all in vain). Also, i noticed, if i do prepare from an empty string, it still has multi-results... At least how to get rid of them in the execute statement? And i would like to have a recipe to avoid ANY output from SP, for any possible statement (i also have a lot of "update ..."s and "select ... into "s inside, if they can produce multi's). Tnx for any help...

    Read the article

  • Hibernate not saving foreign key, but with junit it's ok

    - by Leonardo
    Hi all, I have this strange problem. In a J2EE webapp with Spring, SmartGWT and Hibernate, I have a class A which has a set of class B, both of them mapped to table A and table B. I wrote a simple test case for testing the service manager which is supposed to do insert, update and delete, and everything works as expected, especially during insert. In the end I have one record in A and records in B with a foreign key to A. But when I try to call the service from the web app, the entities in B are saved without a foreign key reference. I am sure that the service is the same. One thing I noticed, after enabling hibernate logging, is that when the service is called from the application, one more update is made: insert A insert B update A update B update B (foreign key only) update A <--- ??? update B <--- ??? Instead, when the junit test case is run, the update is as follows: insert A insert B update A update B update B (foreign key only) I suppose the last update is what is causing the error; maybe it is overwriting values. Considering that the app is using Spring, with the well-known mechanism of DAO + Manager, where can I investigate to solve this issue? Someone told me that the session is not closed, so hibernate would do one more update before releasing the objects by itself. I am pretty sure that all the configuration (hbm, xml, and the rest) is fine... but I may be wrong. Thanks.

    Read the article

  • How do I programatically verify, create, and update SQL table structure?

    - by JYelton
    Scenario: I have an application (C#) that expects a SQL database and login, which are set by a user. Once connected, it checks for the existence of several table and creates them if not found. I'd like to expand on this by having the program be capable of adding columns to those tables if I release a new version of the program which relies upon the new columns. Question: What is the best way to programatically check the structure of an existing SQL table and create or update it to match an expected structure? I am planning to iterate through the list of required columns and alter the existing table whenever it does not contain the new column. I can't help but wonder if there's an approach that is different or better. Criteria: Here are some of my expectations and self-imposed rules: Newer versions of the program might no longer use certain columns, but they would be retained for data logging purposes. In other words, no columns will be removed. Existing data in the table must be preserved, so the table cannot simply be dropped and recreated. In all cases, newly added columns would allow null data, so the population of old records is taken care of by having default null values. Example: Here is a sample table (because visual examples help!): id sensor_name sensor_status x1 x2 x3 x4 1 na019 OK 0.01 0.21 1.41 1.22 Then, in a new version, I may want to add the column x5. The "x-columns" are all data-storage columns that accept null.
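    For reference, the per-column check the C# loop would emit can stay very small; a sketch of the T-SQL pattern (table name hypothetical, new columns nullable as described, INFORMATION_SCHEMA available on SQL Server 2005+):

```sql
IF NOT EXISTS (
    SELECT 1
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = 'SensorReadings'   -- hypothetical table name
      AND COLUMN_NAME = 'x5'
)
BEGIN
    ALTER TABLE SensorReadings ADD x5 float NULL;
END
```

    Running one such statement per expected column is idempotent, so the same upgrade routine can run safely on every application start.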

    Read the article

  • Append json data to html class name

    - by user2898514
    I have a problem with my json code. I want each json value to be appended to the html element whose class name matches that value's key in the json data. Here's my live demo. If you look at the result in the live demo, it's only showing the last record. Is it possible to make it show all records in order? json var json = '[{"castle":"big","commercial":"large","common":"sergio","cultural":"2009"},' + '{"castle":"big2","commercial":"large2","common":"sergio2","cultural":"20092"}]'; html <div class="castle"></div> <div class="commercial"></div> <div class="common"></div> <div class="cultural"></div> javascript var data = $.parseJSON(json); $.each(data, function(l,v) { $.each(v, function(k,o) { $('.'+k).attr('id', k+o); console.log($('#'+k+o).attr('id')); $('#'+k+o).text(o); }); }); For more illustration... I want the result in the live demo to look like this: big large sergio 2009, big2 large2 sergio2 20092
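    The last record wins because every record writes into the same four divs via .text(), which overwrites the previous content. Appending instead keeps all records in order; a minimal sketch of that change:

```javascript
var data = $.parseJSON(json);

$.each(data, function (i, record) {
    $.each(record, function (key, value) {
        // '.' + key matches the div whose class equals the JSON key
        $('.' + key).append(value + ' ');
    });
});
// Result: <div class="castle">big big2 </div>, <div class="commercial">large large2 </div>, ...
```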

    Read the article

  • JDBC transaction dead-lock solution required?

    - by user49767
    It's a scenario described by my friend, who challenged me to find a solution. He is using an Oracle database and a JDBC connection with read committed as the transaction isolation level. In one of the transactions, he updates a record, executes a select statement, and commits the transaction. When everything happens within a single thread, things are fine. But when multiple requests are handled, a deadlock happens. Thread-A updates a record. Thread-B updates another record. Thread-A issues a select statement and waits for Thread-B's transaction to complete the commit operation. Thread-B issues a select statement and waits for Thread-A's transaction to complete the commit operation. The above causes a deadlock. Since they use the command pattern, the base framework only allows commit to be issued once (at the end of all the db operations), so they are unable to issue a commit immediately after the select statement. My argument was that Thread-A is supposed to select only the records which are already committed, and hence this should not be an issue. But he said that Thread-A will surely wait till Thread-B commits the record. Is that true? What are all the ways to avoid the above issue? Is it possible to change the isolation level (without changing the underlying java framework)? A little information about the base framework: it is something similar to a Struts action; each request is handled by one action, and the transaction begins before execution and commits after execution.

    Read the article
