Search Results

Search found 21942 results on 878 pages for 'named query'.


  • Adding relative week number column to MySQL results

    - by Anthony
    I have a table with 3 columns: user, value, and date. The main query returns the values for a specific user based on a date range:

        SELECT date, value
        FROM values
        WHERE user = '$user' AND date BETWEEN $start AND $end

    What I would like is for the results to also have a column indicating the week number relative to the date range. So if the date range is 1/1/2010 - 1/20/2010, then any results from the first Sun - Sat of that range are week 1, the next Sun - Sat are week 2, and so on. If the date range starts on a Saturday, then only results from that one day would be week 1. If the date range starts on a Thursday but the first result is on the following Monday, it would be week 2, and there would be no week 1 results. Is this something fairly simple to add to the query? The only ideas I can come up with are based on the week number within the year, or the week number based on the results themselves (where, in the second example above, the first result always gets week 1).
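    A minimal sketch of one way this could be computed in MySQL, assuming the week boundaries are anchored to the Sunday on or before the start of the range and that $start and $end are substituted as quoted date literals:

        SELECT date,
               value,
               -- days elapsed since the Sunday on or before $start, split into 7-day buckets
               FLOOR(DATEDIFF(date, DATE_SUB('$start', INTERVAL DAYOFWEEK('$start') - 1 DAY)) / 7) + 1 AS week_num
        FROM values
        WHERE user = '$user'
          AND date BETWEEN '$start' AND '$end'

    With this definition, a range starting on a Saturday puts that Saturday alone in week 1 and the following Sunday in week 2, matching the examples above.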

  • Why is C# not statically typed, but F# and Haskell are?

    - by ??????? ???????
    There was a talk given by Brian Hurt about the advantages and disadvantages of static typing. Brian said that by static typing he didn't mean C#, but F# and Haskell. Is that because of the dynamic keyword added to C# 4.0? That feature is only rarely useful. Besides, there are ? and unsafeCoerce in Haskell, which are obviously not the same thing, but which could blow up at runtime much like an exception thrown as a result of dynamic. Finally, why can F# and Haskell be called statically typed languages while C# can't?

  • Exclusive filtering by tag

    - by KaptajnKold
    I'm using Rails 3.0 and MySQL 5.1. I have these three models: Question, Tag and QuestionTag. Tag has a column called name. Question has many Tags through QuestionTags, and vice versa. Suppose I have n tag names. How do I find only the questions that have all n tags, identified by tag name? And how do I do it in a single query? (If you can convince me that doing it in more than one query is optimal, I'll be open to that.) A pure Rails 3 solution would be preferred, but I am not averse to a pure SQL solution either.
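    For the SQL side, a minimal sketch of the usual "match all n tags" pattern, assuming Rails' conventional table names (questions, tags, question_tags) and three example tag names:

        SELECT q.id
        FROM questions q
        JOIN question_tags qt ON qt.question_id = q.id
        JOIN tags t           ON t.id = qt.tag_id
        WHERE t.name IN ('folk', 'jazz', 'blues')      -- the n tag names
        GROUP BY q.id
        HAVING COUNT(DISTINCT t.name) = 3              -- n: the question carries every listed tag

    The same shape can be produced from Rails with a joins/group/having chain; the key idea is grouping by question and requiring the distinct-tag count to equal n.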

  • Do you think that in the future it'll be possible to develop games on OS X by using Python and the latest library "Sprite Kit" made by Apple? [on hold]

    - by Cesco
    I don't understand a lot about game engines and modules for Python, even though I'm aware of the existence of PyGame and Pyglet, so please don't bash me too hard if I write something wrong in this question :-) When I upgraded my Mac to the latest version of OS X, I noticed for the first time that Apple is providing a library named Sprite Kit for developing games on both iOS and OS X. It looks fairly complete to me, and the fact that it is managed by a big company gives me the impression it will be well supported for the time being; in summary, it looks... cool. At the moment, in order to take advantage of Sprite Kit you need to code in Obj-C. Since I don't know Obj-C but only a little bit of Python, do you think there's a chance that sooner or later someone will make a wrapper for Python? Thank you very much and best regards

  • Propel: How is the "Affected Rows" value returned from doUpdate defined?

    - by Ngu Soon Hui
    In Propel there is a doUpdate function that returns the number of rows affected by the query. The question is: if there is no need to update a row (because the set value is already the same as the field value), will that row be counted as an affected row? Take for example the following table:

        ID | Name  | Books
        1  | S1oon | Me
        2  | S1oon | Me

    Let's assume that I write an ORM function equivalent to the following query:

        UPDATE `new table` SET Books = 'Me' WHERE Name = 'S1oon';

    What will the doUpdate result be? Will it return 0 (because all the Books values are already 'Me', hence there is no need to update), or will it be 2 (because there are 2 rows that fulfill the WHERE condition)?

  • Insert a date into a MySQL database

    - by kawtousse
    I use a jQuery datepicker, then I read its value in my servlet like this:

        String dateimput = request.getParameter("datepicker");

    then parse it like this:

        System.out.println("datepicker:" + dateimput);
        DateFormat df = new SimpleDateFormat("MM/dd/yyyy");
        java.util.Date dt = null;
        try {
            dt = df.parse(dateimput);
            System.out.println("date imput parssé1 est:" + dt);
            System.out.println("date imput parsée2 est:" + df.format(dt));
        } catch (ParseException e) {
            e.printStackTrace();
        }

    and build the insert query like this:

        String query = "Insert into dailytimesheet(trackingDate,activity,projectCode) values (" + df.format(dt) + ", \"" + activity + "\", \"" + projet + "\")";

    Everything runs successfully up to this point, but if I check the inserted record I find the date 01/01/0001 00:00:00. I've tried to fix it but it is still a mess for me.
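    For reference, MySQL wants DATE/DATETIME literals quoted and in ISO (or explicitly converted) form, whereas the concatenation above sends an unquoted MM/dd/yyyy value. A sketch of what the generated statement would need to look like, with placeholder values (a parameterized PreparedStatement with setDate would be the safer route):

        INSERT INTO dailytimesheet (trackingDate, activity, projectCode)
        VALUES (STR_TO_DATE('04/27/2010', '%m/%d/%Y'),   -- or simply the quoted literal '2010-04-27'
                'some activity',
                'some project');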

  • Is a many-to-many relationship with extra fields the right tool for my job?

    - by whichhand
    Previously I had a go at asking a more specific version of this question, but had trouble articulating what my question was. On reflection, that made me doubt whether my chosen solution was correct for the problem, so this time I will explain the problem and ask a) whether I am on the right track and b) whether there is a way around my current brick wall. I am currently building a web interface to enable an existing database to be interrogated by (a small number of) users. Sticking with the analogy from the docs, I have models that look something like this:

        class Musician(models.Model):
            first_name = models.CharField(max_length=50)
            last_name = models.CharField(max_length=50)
            dob = models.DateField()

        class Album(models.Model):
            artist = models.ForeignKey(Musician)
            name = models.CharField(max_length=100)

        class Instrument(models.Model):
            artist = models.ForeignKey(Musician)
            name = models.CharField(max_length=100)

    I have one central table (Musician) and several tables of associated data that are related by either ForeignKey or OneToOneFields. Users interact with the database by creating filtering criteria to select a subset of Musicians based on the data on the main or related tables. Likewise, the users can then select which piece of data is used to rank the results that are presented to them. The results are initially viewed as a two-dimensional table with a single row per Musician and selected data fields (or aggregates) in each column. To give you some idea of scale, the database has ~5,000 Musicians with around 20 fields of related data. Up to here everything is fine and I have a working implementation. However, it is important that a given user can upload their own annotation data sets (more than one) and then filter and order on these in the same way they can with the existing data. The way I tried to do this was to add the models:

        class UserDataSets(models.Model):
            user = models.ForeignKey(User)
            name = models.CharField(max_length=100)
            description = models.CharField(max_length=64)
            results = models.ManyToManyField(Musician, through='UserData')

        class UserData(models.Model):
            artist = models.ForeignKey(Musician)
            dataset = models.ForeignKey(UserDataSets)
            score = models.IntegerField()

            class Meta:
                unique_together = (("artist", "dataset"),)

    I have a simple upload mechanism enabling users to upload a data set file that consists of a 1 to 1 relationship between a Musician and their "score". Within a given user dataset each artist will be unique, but different datasets are independent from each other and will often contain entries for the same musician. This worked fine for displaying the data; starting from a given artist I can do something like this:

        artist = Musician.objects.get(pk=1)
        dataset = UserDataSets.objects.get(pk=5)
        print artist.userdata_set.get(dataset=dataset.pk)

    However, this approach fell over when I came to implement the filtering and ordering of a query set of Musicians based on the data contained in a single user data set. For example, I can easily order the query set based on all of the data in the UserData table like this:

        artists = Musician.objects.all().order_by('userdata__score')

    But that does not help me order by the results of one particular user dataset. Likewise, I need to be able to filter the query set based on the "scores" from different user data sets (e.g. find all musicians with a score > 5 in dataset1 and < 2 in dataset2). Is there a way of doing this, or am I going about the whole thing wrong?
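    For what it's worth, the SQL that ultimately has to be generated for the single-dataset ordering case would look roughly like the sketch below (the myapp_ table prefix and the dataset id 5 are illustrative only):

        SELECT m.*
        FROM myapp_musician AS m
        JOIN myapp_userdata AS d
          ON d.artist_id = m.id
         AND d.dataset_id = 5          -- restrict the join to one user data set
        ORDER BY d.score DESC

    Whatever ORM construction is used, it needs to constrain the join to a single dataset rather than joining all UserData rows, otherwise the ordering mixes scores from every dataset.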

  • Interesting Row_Number() bug

    - by Joel Coehoorn
    I was playing with the Stack Exchange Data Explorer and ran this query: http://odata.stackexchange.com/stackoverflow/q/2828/rising-stars-top-50-users-ordered-on-rep-per-day Notice that, down in the results, rows 11 and 12 have the same value and so are mis-numbered, even though the row_number() function takes the same order by parameter as the query. I know the correct fix here is to specify an additional tie-breaker column in the order by clauses, but I'm more curious as to why/how the row_number() function returned a different ordering than the query itself on the same data. If it makes a difference anywhere, this runs on Azure.
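    For reference, a minimal sketch of the usual tie-breaker fix, with illustrative table and column names; the point is simply that both orderings include a unique column so ties are resolved the same way everywhere:

        SELECT ROW_NUMBER() OVER (ORDER BY RepPerDay DESC, UserId) AS rn,
               UserId,
               RepPerDay
        FROM UserStats
        ORDER BY RepPerDay DESC, UserId    -- same tie-breaker as the OVER clause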

  • making sure "expiration_date - X" falls on a valid "date_of_price" (if not, use the next valid date_of_price)

    - by bobbyh
    I have two tables. The first table has two columns: ID and date_of_price. The date_of_price field skips weekend days and bank holidays when markets are closed.

        table: trading_dates

        ID   date_of_price
        1    8/7/2008
        2    8/8/2008
        3    8/11/2008
        4    8/12/2008

    The second table also has two columns: ID and expiration_date. The expiration_date field is the one day in each month when options expire.

        table: expiration_dates

        ID   expiration_date
        1    9/20/2008
        2    10/18/2008
        3    11/22/2008

    I would like to do a query that subtracts a certain number of days from the expiration dates, with the caveat that the resulting date must be a valid date_of_price. If not, then the result should be the next valid date_of_price. For instance, say we are subtracting 41 days from the expiration_date. 41 days prior to 9/20/2008 is 8/10/2008. Since 8/10/2008 is not a valid date_of_price, we have to skip 8/10/2008. The query should return 8/11/2008, which is the next valid date_of_price. Any advice would be appreciated! :-)
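    One possible sketch (MySQL syntax, with the 41-day offset hard-coded for illustration): for each expiration date, take the earliest trading date that falls on or after the target date.

        SELECT e.expiration_date,
               (SELECT MIN(t.date_of_price)
                FROM trading_dates t
                WHERE t.date_of_price >= e.expiration_date - INTERVAL 41 DAY) AS adjusted_date
        FROM expiration_dates e

    This returns the target date itself when it is a trading day, and otherwise the next valid date_of_price, as in the 8/10/2008 to 8/11/2008 example.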

  • determine which value produced a hit in SOLR multivalued field type

    - by harschware
    Suppose I have a multiValued field of type text and I put the values [cat, dog, green, blue] in it. Is there a way to tell, when I execute a query against that field for dog, that it was in the 1st element position of that multiValued field? Assumption: the client does not have any pre-knowledge of what the field type of the queried field is (i.e. Solr must provide the answer; the client can't post-process the returned doc to figure it out, because it would not know how Solr matched the query to the result). Disclosure: I posted this to the solr-user list and am getting no traction, so I'm posting here now.

  • UNION ALL won't work in a stored procedure

    - by MyHeadHurts
        ALTER PROCEDURE [dbo].[MyStoredProcedure1]
            @YearToGet int
        AS
        Select Division, SDESCR, DYYYY,
               Sum(APRICE) ASofSales, Sum(PARTY) AS ASofPAX, Sum(NetAmount) ASofNetSales,
               Sum(InsAmount) ASofInsSales, Sum(CancelRevenue) ASofCXSales,
               Sum(OtherAmount) ASofOtherSales, Sum(CXVALUE) ASofCXValue
        From dbo.B101BookingsDetails
        Where Booked <= CONVERT(int, DateAdd(year, @YearToGet - Year(getdate()),
                                    DateAdd(day, DateDiff(day, 1, getdate()), 0)))
          and (DYYYY = @YearToGet)
        Group By SDESCR, DYYYY, Division
        Having (DYYYY = @YearToGet)
        Order By Division, SDESCR, DYYYY

        union all

        SELECT DIVISION, SDESCR, DYYYY,
               SUM(APRICE) AS YESales, SUM(PARTY) AS YEPAX, SUM(NetAmount) AS YENetSales,
               SUM(InsAmount) AS YEInsSales, SUM(CancelRevenue) AS YECXSales,
               SUM(OtherAmount) AS YEOtherSales, SUM(CXVALUE) AS YECXValue
        FROM dbo.B101BookingsDetails
        Where (DYYYY = @YearToGet)
        GROUP BY SDESCR, DYYYY, DIVISION
        ORDER BY DIVISION, SDESCR, DYYYY

    The error I am getting is:

        Msg 156, Level 15, State 1, Procedure MyStoredProcedure1, Line 36
        Incorrect syntax near the keyword 'union'.

    My goal is this: the user inputs a year, for example 2009. The first query gets all the sales made in 2009 up to the same day of the year as yesterday (12/23/2009), while the second query gets the 2009 totals up to Dec 31, 2009. I want the columns to be side by side, not in one column.
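    For reference, a UNION query can only carry one ORDER BY, and it applies to the combined result after the last SELECT; a trimmed-down sketch of the corrected shape (column list abbreviated, @BookedCutoff standing in for the as-of-yesterday expression):

        SELECT Division, SDESCR, DYYYY, SUM(APRICE) AS Sales, SUM(PARTY) AS PAX
        FROM dbo.B101BookingsDetails
        WHERE Booked <= @BookedCutoff AND DYYYY = @YearToGet
        GROUP BY Division, SDESCR, DYYYY

        UNION ALL

        SELECT Division, SDESCR, DYYYY, SUM(APRICE) AS Sales, SUM(PARTY) AS PAX
        FROM dbo.B101BookingsDetails
        WHERE DYYYY = @YearToGet
        GROUP BY Division, SDESCR, DYYYY

        ORDER BY Division, SDESCR, DYYYY   -- single ORDER BY, applied to the combined result

    Note that UNION ALL stacks the two aggregates as extra rows; to get the as-of and year-end figures side by side in the same row, the two grouped queries would instead need to be joined on Division, SDESCR and DYYYY.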

  • multiple rows of a single table

    - by Amanjot Singh
    I have a table with 3 columns: id, profile_id and plugin_id. There can be more than one plugin associated with a single profile. How can I fetch from the database all the plugins associated with a profile_id, which comes from the session variable defined on the login page? When I try to run the query for this, it only leaves me with the plugin_id of the last record. The query is as follows:

        SqlCommand cmd1 = new SqlCommand("select plugin_id from profiles_plugins where profile_id=" + Convert.ToInt32(Session["cod"]), con);
        SqlDataReader dr1 = cmd1.ExecuteReader();
        if (dr1.HasRows)
        {
            while (dr1.Read())
            {
                Session["edp1"] = Convert.ToInt32(dr1[0]);
            }
        }
        dr1.Close();
        cmd1.Dispose();

  • Correct handling of return data

    - by Serhiy
    Hello, I have a question related to the correct handling of return values in the DAO library I'm writing for one project. This library will probably be used by other people and I want to do it correctly, so I would like to know how I should deal with the return statements of my DAO's functions.

    Example 1: I have a function getCustomer which should return a String. In case the query doesn't return any result, should I return null, an empty string, or throw some kind of exception?

    Example 2: I have a function getCustomerList which returns an ArrayList. In case the query doesn't return any result, should I return null, an empty ArrayList, or throw some exception?

    Example 3: Some SQL exception was detected. What should I do: throw the exception, or wrap the block where it can occur in try..catch?

    What is the "good" practice or "best" practice to apply in my case? Thanks in advance, Serhiy.

  • How to use SQL - INSERT...ON DUPLICATE KEY UPDATE?

    - by Probocop
    Hi, I have a script which captures tweets and puts them into a database. I will be running the script on a cron job and then displaying the tweets on my site from the database, to avoid hitting the limit on the Twitter API. I don't want duplicate tweets in my database, and I understand I can use INSERT ... ON DUPLICATE KEY UPDATE to achieve this, but I don't quite understand how to use it. My database structure is as follows:

        Table - Hash
            id (auto_increment)
            tweet
            user
            user_url

    And currently my SQL to insert is as follows:

        $tweet = $clean_content[0];
        $user_url = $clean_uri[0];
        $user = $clean_name[0];

        $query = 'INSERT INTO hash (tweet, user, user_url) VALUES ("'.$tweet.'", "'.$user.'", "'.$user_url.'")';
        mysql_query($query);

    How would I correctly use INSERT ... ON DUPLICATE KEY UPDATE to insert only if the tweet doesn't exist, and update if it does? Thanks
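    A minimal sketch of the idea, assuming the tweet text is what defines a duplicate (ON DUPLICATE KEY UPDATE only fires when a UNIQUE or PRIMARY KEY would be violated, so the duplicate column needs a unique index first):

        ALTER TABLE hash ADD UNIQUE KEY uniq_tweet (tweet);   -- use a prefix, e.g. tweet(191), if the column is TEXT

        INSERT INTO hash (tweet, user, user_url)
        VALUES ('some tweet text', 'some_user', 'http://example.com/some_user')
        ON DUPLICATE KEY UPDATE
            user     = VALUES(user),
            user_url = VALUES(user_url);

    The second statement inserts a new row when the tweet is not yet stored, and otherwise refreshes the existing row's user and user_url.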

  • SQL Server Reporting Services Format Hours as Hours:Minutes

    - by Frank Schmitt
    I'm writing reports in SQL Server Reporting Services that show a number of hours grouped by, say, user, with a total calculated from the sum of the values. Currently my query runs a stored proc that returns the hours in HH:MM format rather than as decimal hours, as our users find that more intuitive. The problem occurs when I try to add up the column using an SSRS expression, because the SUM function isn't smart enough to add up times in this format. Is there any way to: display a time interval (in minutes or hours) in HH:MM format while having it calculated in decimal form? Or split up and total the HH:MM text values to arrive at a total as an expression? I'd like to avoid having to write/run a second query just to get the total.
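    One common pattern, sketched below with an illustrative table and column, is to have the proc return the raw minutes (which SSRS can SUM and format) alongside, or instead of, the preformatted string:

        SELECT [user],
               SUM(minutes_worked) AS total_minutes,      -- numeric, safe to SUM in an SSRS expression
               CAST(SUM(minutes_worked) / 60 AS varchar(10)) + ':'
                 + RIGHT('0' + CAST(SUM(minutes_worked) % 60 AS varchar(10)), 2) AS hhmm   -- display only
        FROM dbo.TimeEntries
        GROUP BY [user]

    The report can then sum total_minutes for the group total and apply the same minutes-to-HH:MM conversion in a display expression, so no second query is needed.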

  • Multiple foreign keys from one table linking to single primary key in second table

    - by croker10
    Hi all, I have a database with three tables: a Household table, an Adult table and a Users table. The Household table contains two foreign keys, iAdult1ID and iAdult2ID. The Users table has an iUserID primary key and the Adult table has a corresponding iUserID foreign key. One of the columns in the Users table is strUsername, an e-mail address. I am trying to write a query that will allow me to search for an e-mail address for either adult related to the household. So I have two questions: first, assuming that all the values are not null, how can I do this? And second, in reality iAdult2ID can be null; is it still possible to write a query to do this? Thanks for your help. Let me know if you need any more information.
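    A sketch of one way to express it (the Adult table's primary key is assumed here to be called iAdultID; LEFT JOINs keep households whose iAdult2ID is null):

        SELECT h.*
        FROM Household h
        LEFT JOIN Adult a1 ON a1.iAdultID = h.iAdult1ID
        LEFT JOIN Users u1 ON u1.iUserID  = a1.iUserID
        LEFT JOIN Adult a2 ON a2.iAdultID = h.iAdult2ID
        LEFT JOIN Users u2 ON u2.iUserID  = a2.iUserID
        WHERE u1.strUsername = 'someone@example.com'
           OR u2.strUsername = 'someone@example.com'

    Because the second adult is joined with LEFT JOINs, a household with only one adult is still returned when the first adult's e-mail matches.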

  • Data Source Security Part 4

    - by Steve Felts
    So far, I have covered the Client Identity and Oracle Proxy Session features, with WLS or database credentials. This article will cover one more feature, identity-based pooling. Then there is one more topic to cover - how these options play with transactions.

    Identity-based Connection Pooling

    An identity-based pool creates a heterogeneous pool of connections. This allows applications to use a JDBC connection with a specific DBMS credential by pooling physical connections with different DBMS credentials. The DBMS credential is based on either the WebLogic user mapped to a database user or the database user directly, depending on the "use database credentials" setting as described earlier. Using this feature with "use database credentials" enabled seems to be what is proposed in the JDBC standard: basically a heterogeneous pool with users specified by getConnection(user, password).

    The allocation of connections is more complex if the Enable Identity Based Connection Pooling attribute is enabled on the data source. When an application requests a database connection, the WebLogic Server instance selects an existing physical connection or creates a new physical connection with the requested DBMS identity. Heterogeneous connections are created as follows:

    1. At connection pool initialization, the physical JDBC connections based on the configured or default "initial capacity" are created with the configured default DBMS credential of the data source.
    2. An application tries to get a connection from a data source.
    3a. If "use database credentials" is not enabled, the user specified in getConnection is mapped to a DBMS credential, as described earlier. If the credential map doesn't have a matching user, the default DBMS credential from the data source descriptor is used.
    3b. If "use database credentials" is enabled, the user and password specified in getConnection are used directly.
    4. The connection pool is searched for a connection with a matching DBMS credential.
    5. If a match is found, the connection is reserved and returned to the application.
    6. If no match is found, a connection is created or reused based on the maximum capacity of the pool:
       - If the maximum capacity has not been reached, a new connection is created with the DBMS credential, reserved, and returned to the application.
       - If the pool has reached maximum capacity, a physical connection is selected from the pool based on the least recently used (LRU) algorithm and destroyed. A new connection is then created with the DBMS credential, reserved, and returned to the application.

    It should be clear that finding a matching connection is more expensive than in a homogeneous pool, and destroying a connection to create a new one is very expensive. If you can use a normal homogeneous pool or one of the light-weight options (client identity or an Oracle proxy connection), those should be used instead of identity-based pooling.

    Regardless of how physical connections are created, each physical connection in the pool has its own DBMS credential information maintained by the pool. Once a physical connection is reserved by the pool, it does not change its DBMS credential, even if the current thread changes its WebLogic user credential and continues to use the same connection.

    To configure this feature, select Enable Identity Based Connection Pooling. See http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/EnableIdentityBasedConnectionPooling.html "Enable identity-based connection pooling for a JDBC data source" in Oracle WebLogic Server Administration Console Help.

    You must make the following changes to use Logging Last Resource (LLR) transaction optimization with identity-based pooling, to get around the problem that multiple users will be accessing the associated transaction table:
    - You must configure a custom schema for LLR using a fully qualified LLR table name. All LLR connections will then use the named schema rather than the default schema when accessing the LLR transaction table.
    - Use database-specific administration tools to grant permission to access the named LLR table to all users that could access this table via a global transaction. By default, the LLR table is created during boot by the user configured for the connection in the data source. In most cases, the database will only allow access to this user and not allow access to mapped users.

    Connections within Transactions

    Now that we have covered the behavior of all of these various options, it's time to discuss the exception to all of the rules: when you get a connection within a transaction, it is associated with the transaction context on a particular WLS instance.

    When getting a connection from a data source configured with non-XA LLR or 1PC (using the JTS driver) with global transactions, the first connection obtained within the transaction is returned on subsequent connection requests regardless of the username/password specified and independent of the associated proxy user session, if any. The connection must be shared among all users of the connection when using LLR or 1PC.

    For XA data sources, the first connection obtained within the global transaction is returned on subsequent connection requests within the application server, regardless of the username/password specified and independent of the associated proxy user session, if any. The connection must be shared among all users of the connection within a global transaction within the application server/JVM.

  • Insert record into mysql db with Entity Framework

    - by sanfra1983
    Hi, the problem is that I want to insert a new record into a MySQL table. I have already done the mapping of the MySQL db and I have already run tests that return data, and everything works. Now I read from a file in which queries are written; I want to run them and get back true or false based on the outcome of each single query written in the .txt file. I did this:

        using (var we = new demotestEntities())
        {
            foreach (var l in listaqueri)
            {
                var p = we.CreateQuery<category>(l);
                we.SaveChanges();
                result = true;
            }
        }

    but it does not work. It returns no errors, but neither does it insert the record from the query. The query .txt file is as follows:

        INSERT INTO category (id, name) VALUES (null, 'test2')

    Can anyone help me?

  • jQuery: How do I pass a value into an Ajax call?

    - by Legend
    I am updating some divs as follows:

        for (var i = 0; i < data.length; i++) {
            var query = base_url + data[i];
            $.ajax({
                url: query,
                type: 'GET',
                dataType: 'jsonp',
                timeout: 2000,
                error: function() {
                    self.html("Network Error");
                },
                success: function(json) {
                    $("#li" + i).html("<img src='" + json.result.list[0].url + "' />");
                }
            });
        }

    The value of i does not work inside the ajax call. I am trying to pass the value of i so that it can attach the element to the proper div. Can someone help me out?

  • Using an IN clause in Vb.net to save something to the database using SQL

    - by Rob
    I have a textbox and a button on a form. I wish to run a query (in VB.Net) that builds an IN clause from the textbox value. Below is an example of my code:

        myConnection = New SqlConnection("Data Source=sqldb\;Initial Catalog=Rec;Integrated Security=True")
        myConnection.Open()
        myCommand = New SqlCommand("UPDATE dbo.Recordings SET Status = 0 where RecID in ('" & txtRecID.Text & "') ", myConnection)
        ra = myCommand.ExecuteNonQuery()
        myConnection.Close()
        MsgBox("Done!", MsgBoxStyle.Information, "Done")

    When I enter a single value it works, but when I enter values separated by commas it throws an error: "Conversion failed when converting the varchar value '1234,4567' to data type int." Could someone please help me solve this, or suggest an alternative approach? Many thanks
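    For reference, the concatenation above wraps the entire textbox content in one pair of quotes, so SQL Server receives a single varchar value that it cannot convert to the int RecID column. Roughly, the difference is (sample IDs only; splitting the input and passing parameters, or a table-valued parameter, avoids both the conversion error and SQL injection):

        -- what is being sent now: one quoted varchar value
        UPDATE dbo.Recordings SET Status = 0 WHERE RecID IN ('1234,4567')

        -- what an int column needs: separate numeric values
        UPDATE dbo.Recordings SET Status = 0 WHERE RecID IN (1234, 4567)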

  • How can I make refactoring a priority for my team?

    - by Joseph Garland
    The codebase I work with daily has no automated tests, inconsistent naming and tons of comments like "Why is this here?", "Not sure if this is needed" or "This method isn't named right" and the code is littered with "Changelogs" despite the fact we use source control. Suffice it to say, our codebase could use refactoring. We always have tasks to fix bugs or add new features, so no time is put aside to refactor code to be better and more modular, and it doesn't seem to be a high priority. How can I demonstrate the value of refactoring such that it gets added to our task lists? Is it worth it to just refactor as I go, asking for forgiveness rather than permission?

  • Mod_rewrite with multiple variables

    - by Andrei
    Hello, I'm using a PHP script that dynamically generates transparent PNGs, for use as CSS backgrounds, from a query string that takes RGBa and HSLa values. The original script can be found here; I've only added HSLa support. Because background URLs with PHP query strings aren't very pretty, and because the query string seems to break the IE 6 transparent PNG hack, I thought of using mod_rewrite to allow the script to be called when a .png with this syntax is requested:

        /assets/colors/h[0-360 value]_s[0-100 value]_l[0-100 value]_a[0-100 value].png

    which would be rewritten to:

        /assets/colors.php?h=[0-360 value]&s=[0-100 value]&l=[0-100 value]&a=[0-100 value]

    The issues I'm encountering are: passing multiple variables with mod_rewrite, and using an underscore as a delimiter. I know this could be done by passing a single variable and then exploding it in the PHP script, but I would prefer it to be done by Apache. Thanks in advance, and if anyone wants my HSLa-enabled version of the script, just ask. Anyway, I recommend you check it out on its author's website.

  • Find groups with both validated and unvalidated users

    - by Matchu
    (Not my real MySQL schema, but it illustrates what needs to be done.) Users can belong to many groups, and groups have many users.

        users:
            id INT
            validated TINYINT(1)

        groups:
            id INT
            name VARCHAR(20)

        groups_users:
            group_id INT
            user_id INT

    I need to find groups that contain both validated and unvalidated users (validated being 1 or 0, respectively), in order to perform a specific manual maintenance task. There are thousands of users, all belonging to at least one group, but a group usually only has 2-5 users. This is a live production server, so I could probably craft a query myself, but the last one I tried took a matter of minutes before I killed it. (I'm not one of those brilliant SQL wizards.) I suppose I could take the server down for maintenance, but, if possible, a query that gets this job done in a matter of seconds would be fantastic. Thanks!
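    A minimal sketch of one way to express this against the schema above; a group qualifies when its members include at least one validated and at least one unvalidated user:

        SELECT g.id, g.name
        FROM groups g
        JOIN groups_users gu ON gu.group_id = g.id
        JOIN users u         ON u.id = gu.user_id
        GROUP BY g.id, g.name
        HAVING MIN(u.validated) = 0     -- at least one unvalidated member
           AND MAX(u.validated) = 1     -- at least one validated member

    With an index on groups_users (group_id, user_id) and the users primary key, the join should stay cheap even with thousands of users.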

  • Sharing files with Android devices (How do I mount an HP Touchpad, Cyanogen Mod 9?)

    - by C.Werthschulte
    I've recently installed CyanogenMod 9 on my HP Touchpad tablet, but I'm encountering problems when trying to access it from my Ubuntu laptop (Ubuntu 11.10, GNOME Shell, Nautilus). I first tried accessing it via PTP as suggested here. Ubuntu recognizes the Touchpad as a digicam and only grants me access to two directories: "DCIM" and "Pictures". I then tried accessing the tablet via MTP using this post on OMG! Ubuntu!. Ubuntu connects to the tablet, but only grants me access to a folder named "Playlists". I'm a bit clueless as to what I'm doing wrong and would very much appreciate any help or hints. Many thanks!

  • Picasa (installed using Wine) unable to access some NTFS partitions again in Ubuntu 14.04?

    - by Tom
    Recently I installed Picasa on my Ubuntu 14.04 system using Wine. After performing a Wine configuration I was able to access all the images on the NTFS partitions of my hard disk. I performed the Wine configuration as described in "Why can an application installed using Wine not access NTFS partitions?". After 3 days it became unable to access 2 of the NTFS partitions, though I can still access 1 NTFS partition (named E, see image below). As you can see in the screenshot, I am unable to expand both D and F. How can I fix this issue permanently?
