Search Results

Search results for 'query tuning'.


  • Windows DNS Server 2008 R2 fallaciously returns SERVFAIL

    - by Easter Sunshine
    I have a Windows 2008 R2 domain controller which is also a DNS server. When resolving certain TLDs, it returns a SERVFAIL:

        $ dig bogus.
        ; <<>> DiG 9.8.1 <<>> bogus.
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 31919
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
        ;; QUESTION SECTION:
        ;bogus. IN A

    I get the same result for a real TLD like com. when querying the DC as shown above. Compare to a BIND server that is working as expected:

        $ dig bogus. @128.59.59.70
        ; <<>> DiG 9.8.1 <<>> bogus. @128.59.59.70
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 30141
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
        ;; QUESTION SECTION:
        ;bogus. IN A
        ;; AUTHORITY SECTION:
        . 10800 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2012012501 1800 900 604800 86400
        ;; Query time: 18 msec
        ;; SERVER: 128.59.59.70#53(128.59.59.70)
        ;; WHEN: Wed Jan 25 14:09:14 2012
        ;; MSG SIZE rcvd: 98

    Similarly, when I query my Windows DNS server with dig . any, I get a SERVFAIL, but the BIND servers return the root zone as expected. This sounds similar to the issue described in http://support.microsoft.com/kb/968372, except that I am using two forwarders (128.59.59.70 from above as well as 128.59.62.10) and falling back to root hints, so the preconditions to expose the issue are not the same. Nevertheless, I also applied the MaxCacheTTL registry fix as described, restarted DNS and then the whole server as well, but the problem persists. The problem occurs on all domain controllers in this domain and has been occurring for the past half year, even though the servers receive automatic Windows updates.

    EDIT: Here is a debug log. The client is 160.39.114.110, which is my workstation.

        1/25/2012 2:16:01 PM 0E08 PACKET 000000001EA6BFD0 UDP Rcv 160.39.114.110 2e94 Q [0001 D NOERROR] A (5)bogus(0)
        UDP question info at 000000001EA6BFD0
        Socket = 508
        Remote addr 160.39.114.110, port 49710
        Time Query=1077016, Queued=0, Expire=0
        Buf length = 0x0fa0 (4000)
        Msg length = 0x0017 (23)
        Message:
          XID     0x2e94
          Flags   0x0100
          QR      0 (QUESTION)
          OPCODE  0 (QUERY)
          AA 0, TC 0, RD 1, RA 0, Z 0, CD 0, AD 0
          RCODE   0 (NOERROR)
          QCOUNT 1, ACOUNT 0, NSCOUNT 0, ARCOUNT 0
        QUESTION SECTION:
          Offset = 0x000c, RR count = 0
          Name "(5)bogus(0)"
          QTYPE A (1)
          QCLASS 1
        ANSWER SECTION:     empty
        AUTHORITY SECTION:  empty
        ADDITIONAL SECTION: empty

        1/25/2012 2:16:01 PM 0E08 PACKET 000000001EA6BFD0 UDP Snd 160.39.114.110 2e94 R Q [8281 DR SERVFAIL] A (5)bogus(0)
        UDP response info at 000000001EA6BFD0
        Socket = 508
        Remote addr 160.39.114.110, port 49710
        Time Query=1077016, Queued=0, Expire=0
        Buf length = 0x0fa0 (4000)
        Msg length = 0x0017 (23)
        Message:
          XID     0x2e94
          Flags   0x8182
          QR      1 (RESPONSE)
          OPCODE  0 (QUERY)
          AA 0, TC 0, RD 1, RA 1, Z 0, CD 0, AD 0
          RCODE   2 (SERVFAIL)
          QCOUNT 1, ACOUNT 0, NSCOUNT 0, ARCOUNT 0
        QUESTION SECTION:
          Offset = 0x000c, RR count = 0
          Name "(5)bogus(0)"
          QTYPE A (1)
          QCLASS 1
        ANSWER SECTION:     empty
        AUTHORITY SECTION:  empty
        ADDITIONAL SECTION: empty

    Every option in the debug log box was checked except "filter by IP". By contrast, when I query, say, accounts.google.com, I can see the DNS server go out to its forwarder (128.59.59.70, for example). In this case, I didn't see any packets going out from my DNS server even though bogus. was not in the cache (the debug log was already running, and this is the first time I queried this server for bogus. or any TLD). It just returned SERVFAIL without consulting any other DNS server, as in the Microsoft KB article linked above.
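    For reference, the MaxCacheTTL workaround mentioned above can also be applied programmatically. Here is a minimal C# sketch; the key path and value name are the ones documented for the DNS Server service, but the TTL value shown is only an assumed example - use the value recommended in the KB article, run elevated, and restart the DNS Server service afterwards:

        using Microsoft.Win32;

        // Set MaxCacheTTL (in seconds) under the DNS Server service parameters.
        // NOTE: 86400 (one day) is an assumed example value, not the KB's prescription.
        Registry.SetValue(
            @"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\DNS\Parameters",
            "MaxCacheTTL",
            86400,
            RegistryValueKind.DWord);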

    Read the article

  • The blocking nature of aggregates

    - by Rob Farley
    I wrote a post recently about how query tuning isn't just about how quickly the query runs – that if you have something (such as SSIS) that is consuming your data (and probably introducing a bottleneck), then it might be more important to have a query which focuses on getting the first bit of data out. You can read that post here. In particular, we looked at two operators that could be used to ensure that a query returns only Distinct rows: Sort and Hash Match. The Sort operator pulls in all the data, sorts it (discarding duplicates), and then pushes out the remaining rows. The Hash Match operator performs a hashing function on each row as it comes in, and then looks to see if it's created a hash it's seen before; if not, it pushes the row out. The Sort method is quicker, but has to wait until it's gathered all the data before it can do the sort, and therefore blocks the data flow.

    But that was my last post. This one's a bit different: it looks at how Aggregate functions work, which ties nicely into this month's T-SQL Tuesday. I've frequently explained that DISTINCT and GROUP BY are essentially the same function, although DISTINCT is the poorer cousin because you have less control over it and you can't apply aggregate functions. Just like the operators used for Distinct, Aggregate operators come in different flavours – blocking and non-blocking varieties.

    The example I like to use to explain this is a pile of playing cards. If I'm handed a pile of cards and asked to count how many cards there are in each suit, it's going to help if the cards are already ordered. If I'm playing a game of Bridge, I can easily glance at my hand and count how many there are in each suit, because I keep the pile of cards in order. Moving from left to right, I could tell you I have four Hearts in my hand even before I've got to the end. By telling you that I have four Hearts as soon as I know, I demonstrate the principle of a non-blocking operation. This is known as a Stream Aggregate operation. It requires input which is sorted by whichever columns the grouping is on, and it will release a row as soon as the group changes – when I encounter a Spade, I know I don't have any more Hearts in my hand.

    Alternatively, if the pile of cards is not sorted, I won't know how many Hearts I have until I've looked through all the cards. In fact, to count them, I basically need to put them into little piles, and when I've finished making all those piles, I can count how many there are in each. Because I don't know any of the final numbers until I've seen all the cards, this is blocking. This performs the aggregate function using a Hash Match. Observant readers will remember this from my Distinct example. You might remember that my earlier Hash Match operation – used for Distinct Flow – wasn't blocking, but this one is. They're essentially doing a similar operation, applying a hash function to some data and seeing if the set of values has been seen before, but this time it needs more than the mere existence of a new set of values: it needs to consider how many of them there are.

    A lot depends here on whether the data coming out of the source is sorted or not, and this is largely determined by the indexes that are being used. If you look in the Properties of an Index Scan, you'll be able to see whether the order of the data is required by the plan; a property called Ordered will demonstrate this.
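    As an aside, the blocking/non-blocking distinction is easy to sketch outside SQL Server. This is a C# analogy added for this digest (not Rob's code): the streaming version yields a group's count as soon as the key changes, while the hash version cannot yield anything until it has consumed all of its input.

        using System.Collections.Generic;

        static class AggregateSketch
        {
            // Non-blocking (Stream Aggregate analogy): input must arrive sorted by key.
            // A group's count is yielded as soon as the key changes - before end of input.
            public static IEnumerable<KeyValuePair<string, int>> StreamCount(IEnumerable<string> sortedKeys)
            {
                string current = null;
                int count = 0;
                foreach (var key in sortedKeys)
                {
                    if (current != null && key != current)
                    {
                        yield return new KeyValuePair<string, int>(current, count);
                        count = 0;
                    }
                    current = key;
                    count++;
                }
                if (current != null)
                    yield return new KeyValuePair<string, int>(current, count);
            }

            // Blocking (Hash Match analogy): works on unsorted input, but nothing can be
            // yielded until every key has been seen - any row could still change a count.
            public static IEnumerable<KeyValuePair<string, int>> HashCount(IEnumerable<string> keys)
            {
                var counts = new Dictionary<string, int>();
                foreach (var key in keys)
                {
                    int c;
                    counts[key] = counts.TryGetValue(key, out c) ? c + 1 : 1;
                }
                foreach (var pair in counts)
                    yield return pair;
            }
        }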
    In the particular example from the original post (illustrated there with plan screenshots), the second plan is significantly faster, but is dependent on having ordered data. In fact, if I force a Stream Aggregate on unordered data (which I'm doing by telling it to use a different index), a Sort operation is needed, which makes my plan a lot slower. This is all very straightforward stuff, and information that most people are fully aware of. I'm sure you've all read my good friend Paul White (@sql_kiwi)'s post on how the Query Optimizer chooses which type of aggregate function to apply.

    But let's take a look at SQL Server Integration Services. SSIS gives us an Aggregate transformation for use in Data Flow Tasks, but it's described as Blocking. The definitive article on Performance Tuning SSIS uses Sort and Aggregate as examples of Blocking Transformations. I've just shown you that Aggregate operations used by the Query Optimizer are not always blocking, yet the SSIS Aggregate component is given as an example of a blocking transformation. But is that always the case? After all, there are plenty of SSIS performance tuning talks out there that describe the value of sorted data in Data Flow Tasks, describing the IsSorted property that can be set through the Advanced Editor of your Source component.

    And so I set about testing the Aggregate transformation in SSIS, to prove for sure whether providing sorted data would let the Aggregate transform behave like a Stream Aggregate. (Of course, I knew the answer already, but it helps to be able to demonstrate these things.) A query that will produce a million rows in order was in order. Let me rephrase: I used a query which produced the numbers from 1 to 1000000, in a single field, ordered. The IsSorted flag was set on the source output, with the only column as SortKey 1. Performing an Aggregate function over this (counting the number of rows per distinct number) should produce an additional column with 1 in it. If this were being done in T-SQL, the ordered data would allow a Stream Aggregate to be used. In fact, if the Query Optimizer saw that the field had a Unique Index on it, it would be able to skip the Aggregate function completely and just insert the value 1. That's a shortcut I wouldn't be expecting from SSIS, but certainly the Stream behaviour would be nice.

    Unfortunately, it's not the case. As you can see from the screenshots in the original post, the data pours into the Aggregate function and is not released until all million rows have been seen. It's not doing a Stream Aggregate at all. This is expected behaviour. (I put that in bold, because I want you to realise this.) An SSIS transformation is a piece of code that runs; it's a physical operation. When you write T-SQL and ask for an aggregation to be done, that's a logical operation, and the physical operation is either a Stream Aggregate or a Hash Match. In SSIS, you're telling the system that you want a generic Aggregation that will have to work with whatever data is passed in.

    I'm not saying that it wouldn't be possible to make a sometimes-blocking aggregation component in SSIS. A Custom Component could be created which could detect whether the SortKeys columns of the input matched the Grouping columns of the Aggregation, and either call the blocking code or the non-blocking code as appropriate. One day I'll make one of those and publish it on my blog. I've done it before with a Script Component, but as Script Components are single-use, I was able to handle the data knowing everything about my data flow already.
    As per my previous post, there are a lot of aspects in which tuning SSIS and tuning execution plans use similar concepts. In both situations, it really helps to have a feel for what's going on behind the scenes. Considering whether an operation is blocking or not is extremely relevant to performance, and it's not always obvious from the surface. In a future post, I'll show the impact of blocking vs non-blocking and synchronous vs asynchronous components in SSIS, using some of LobsterPot's Script Components and Custom Components as examples. When I get that sorted, I'll make a Stream Aggregate component available for download.

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5: Part 2 – Table per Type (TPT)

    - by mortezam
    In the previous blog post you saw that there are three different approaches to representing an inheritance hierarchy, and I explained Table per Hierarchy (TPH) as the default mapping strategy in EF Code First. We argued that the disadvantages of TPH may be too serious for our design, since it results in denormalized schemas that can become a major burden in the long run. In today's blog post we are going to learn about Table per Type (TPT) as another inheritance mapping strategy, and we'll see that TPT doesn't expose us to this problem.

    Table per Type (TPT)

    Table per Type is about representing inheritance relationships as relational foreign key associations. Every class/subclass that declares persistent properties—including abstract classes—has its own table. The table for a subclass contains columns only for each noninherited property (each property declared by the subclass itself), along with a primary key that is also a foreign key of the base class table. (The original post illustrates this with a schema diagram.) For example, if an instance of the CreditCard subclass is made persistent, the values of properties declared by the BillingDetail base class are persisted to a new row of the BillingDetails table. Only the values of properties declared by the subclass (i.e. CreditCard) are persisted to a new row of the CreditCards table. The two rows are linked together by their shared primary key value. Later, the subclass instance may be retrieved from the database by joining the subclass table with the base class table.

    TPT Advantages

    The primary advantage of this strategy is that the SQL schema is normalized. In addition, schema evolution is straightforward (modifying the base class or adding a new subclass is just a matter of modifying or adding one table). Integrity constraint definitions are also straightforward (note how CardType in the CreditCards table is now a non-nullable column). Another, much more important, advantage is the ability to handle polymorphic associations (a polymorphic association is an association to a base class, hence to all classes in the hierarchy, with dynamic resolution of the concrete class at runtime). A polymorphic association to a particular subclass may be represented as a foreign key referencing the table of that particular subclass.
    Implement TPT in EF Code First

    We can create a TPT mapping simply by placing the Table attribute on the subclasses to specify the mapped table name (Table is a new data annotation that has been added to the System.ComponentModel.DataAnnotations namespace in CTP5):

        public abstract class BillingDetail
        {
            public int BillingDetailId { get; set; }
            public string Owner { get; set; }
            public string Number { get; set; }
        }

        [Table("BankAccounts")]
        public class BankAccount : BillingDetail
        {
            public string BankName { get; set; }
            public string Swift { get; set; }
        }

        [Table("CreditCards")]
        public class CreditCard : BillingDetail
        {
            public int CardType { get; set; }
            public string ExpiryMonth { get; set; }
            public string ExpiryYear { get; set; }
        }

        public class InheritanceMappingContext : DbContext
        {
            public DbSet<BillingDetail> BillingDetails { get; set; }
        }

    If you prefer the fluent API, you can create a TPT mapping by using the ToTable() method:

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            modelBuilder.Entity<BankAccount>().ToTable("BankAccounts");
            modelBuilder.Entity<CreditCard>().ToTable("CreditCards");
        }

    Generated SQL For Queries

    Let's take an example of a simple non-polymorphic query that returns a list of all the BankAccounts:

        var query = from b in context.BillingDetails.OfType<BankAccount>()
                    select b;

    Executing this query (by invoking the ToList() method) results in SQL statements being sent to the database; the original post shows the generated SQL together with the result of running it in SQL Server Management Studio. Now, let's take an example of a very simple polymorphic query that requests all the BillingDetails, which includes both BankAccount and CreditCard types. It projects some properties out of the base class BillingDetail, without querying for anything from any of the subclasses:

        var query = from b in context.BillingDetails
                    select new { b.BillingDetailId, b.Number, b.Owner };

    or, materializing the full entities:

        var query = from b in context.BillingDetails
                    select b;

    This LINQ query seems even simpler than the previous one, but the resulting SQL query is not as simple as you might expect. As the generated SQL in the original post shows, EF Code First relies on an INNER JOIN to detect the existence (or absence) of rows in the subclass tables CreditCards and BankAccounts, so it can determine the concrete subclass for a particular row of the BillingDetails table. Also, the SQL CASE statements that you see at the beginning of the query are just to ensure that columns which are irrelevant for a particular row have NULL values in the returned flattened table (e.g. BankName for a row that represents a CreditCard type).

    TPT Considerations

    Even though this mapping strategy is deceptively simple, experience shows that performance can be unacceptable for complex class hierarchies, because queries always require a join across many tables. In addition, this mapping strategy is more difficult to implement by hand—even ad-hoc reporting is more complex. This is an important consideration if you plan to use handwritten SQL in your application. (For ad-hoc reporting, database views provide a way to offset the complexity of the TPT strategy: a view may be used to transform the table-per-type model into the much simpler table-per-hierarchy model.)

    Summary

    In this post we learned about Table per Type as the second inheritance mapping strategy in our series. So far, the strategies we've discussed require extra consideration with regard to the SQL schema (e.g. in TPT, foreign keys are needed).
    This situation changes with the Table per Concrete Type (TPC) strategy that we will discuss in the next post.

    References: ADO.NET team blog; Java Persistence with Hibernate (book).

    Read the article

  • How do I add a column that displays the number of distinct rows to this query?

    - by Fake Code Monkey Rashid
    Hello good people! I don't know how to ask my question clearly, so I'll just show you the money. To start with, here's a sample table:

        CREATE TABLE sandbox (
            id integer NOT NULL,
            callsign text NOT NULL,
            this text NOT NULL,
            that text NOT NULL,
            "timestamp" timestamp with time zone DEFAULT now() NOT NULL
        );

        CREATE SEQUENCE sandbox_id_seq START WITH 1 INCREMENT BY 1 NO MINVALUE NO MAXVALUE CACHE 1;
        ALTER SEQUENCE sandbox_id_seq OWNED BY sandbox.id;
        SELECT pg_catalog.setval('sandbox_id_seq', 14, true);
        ALTER TABLE sandbox ALTER COLUMN id SET DEFAULT nextval('sandbox_id_seq'::regclass);

        INSERT INTO sandbox VALUES (1, 'alpha', 'foo', 'qux', '2010-12-29 16:51:09.897579+00');
        INSERT INTO sandbox VALUES (2, 'alpha', 'foo', 'qux', '2010-12-29 16:51:36.108867+00');
        INSERT INTO sandbox VALUES (3, 'bravo', 'bar', 'quxx', '2010-12-29 16:52:36.370507+00');
        INSERT INTO sandbox VALUES (4, 'bravo', 'foo', 'quxx', '2010-12-29 16:52:47.584663+00');
        INSERT INTO sandbox VALUES (5, 'charlie', 'foo', 'corge', '2010-12-29 16:53:00.742356+00');
        INSERT INTO sandbox VALUES (6, 'delta', 'foo', 'qux', '2010-12-29 16:53:10.884721+00');
        INSERT INTO sandbox VALUES (7, 'alpha', 'foo', 'corge', '2010-12-29 16:53:21.242904+00');
        INSERT INTO sandbox VALUES (8, 'alpha', 'bar', 'corge', '2010-12-29 16:54:33.318907+00');
        INSERT INTO sandbox VALUES (9, 'alpha', 'baz', 'quxx', '2010-12-29 16:54:38.727095+00');
        INSERT INTO sandbox VALUES (10, 'alpha', 'bar', 'qux', '2010-12-29 16:54:46.237294+00');
        INSERT INTO sandbox VALUES (11, 'alpha', 'baz', 'qux', '2010-12-29 16:54:53.891606+00');
        INSERT INTO sandbox VALUES (12, 'alpha', 'baz', 'corge', '2010-12-29 16:55:39.596076+00');
        INSERT INTO sandbox VALUES (13, 'alpha', 'baz', 'corge', '2010-12-29 16:55:44.834019+00');
        INSERT INTO sandbox VALUES (14, 'alpha', 'foo', 'qux', '2010-12-29 16:55:52.848792+00');

        ALTER TABLE ONLY sandbox ADD CONSTRAINT sandbox_pkey PRIMARY KEY (id);

    Here's the current SQL query I have:

        SELECT *
        FROM (
            SELECT DISTINCT ON (this, that) id, this, that, timestamp
            FROM sandbox
            WHERE callsign = 'alpha' AND CAST(timestamp AS date) = '2010-12-29'
        ) playground
        ORDER BY timestamp DESC

    This is the result it gives me:

        id  this  that   timestamp
        --  ----  -----  -----------------------------
        14  foo   qux    2010-12-29 16:55:52.848792+00
        13  baz   corge  2010-12-29 16:55:44.834019+00
        11  baz   qux    2010-12-29 16:54:53.891606+00
        10  bar   qux    2010-12-29 16:54:46.237294+00
        9   baz   quxx   2010-12-29 16:54:38.727095+00
        8   bar   corge  2010-12-29 16:54:33.318907+00
        7   foo   corge  2010-12-29 16:53:21.242904+00

    This is what I want to see:

        id  this  that   timestamp                      count
        --  ----  -----  -----------------------------  -----
        14  foo   qux    2010-12-29 16:55:52.848792+00  3
        13  baz   corge  2010-12-29 16:55:44.834019+00  2
        11  baz   qux    2010-12-29 16:54:53.891606+00  1
        10  bar   qux    2010-12-29 16:54:46.237294+00  1
        9   baz   quxx   2010-12-29 16:54:38.727095+00  1
        8   bar   corge  2010-12-29 16:54:33.318907+00  1
        7   foo   corge  2010-12-29 16:53:21.242904+00  1

    EDIT: I'm using PostgreSQL 9.0.* (if that helps any).
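    One way to get that extra column (a sketch added for this digest, not from the original thread) is a window function: COUNT(*) OVER (PARTITION BY this, that) is computed before DISTINCT ON filters the rows, so each surviving row carries the size of its group. Note also the inner ORDER BY: DISTINCT ON keeps the first row per (this, that) group, so ordering by this, that, "timestamp" DESC makes that the latest row, whereas the original inner query leaves the choice indeterminate. The surrounding C# uses the Npgsql provider purely for illustration; the SQL string is the interesting part:

        using System;
        using Npgsql; // assumed ADO.NET provider for PostgreSQL

        const string sql = @"
            SELECT * FROM (
                SELECT DISTINCT ON (this, that)
                       id, this, that, ""timestamp"",
                       COUNT(*) OVER (PARTITION BY this, that) AS count
                FROM sandbox
                WHERE callsign = 'alpha'
                  AND CAST(""timestamp"" AS date) = '2010-12-29'
                ORDER BY this, that, ""timestamp"" DESC
            ) playground
            ORDER BY ""timestamp"" DESC;";

        using (var conn = new NpgsqlConnection("Host=localhost;Database=test;Username=postgres"))
        using (var cmd = new NpgsqlCommand(sql, conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    Console.WriteLine("{0} | {1} | {2} | {3} | {4}",
                        reader["id"], reader["this"], reader["that"],
                        reader["timestamp"], reader["count"]);
        }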

    Read the article

  • How can you make an emacs macro wait for cscope query results?

    - by Sudhanshu
    I am trying to write a macro which calls cscope-find-functions-calling-this-function on each and every tag in a file displayed in the *Tags List* buffer (created by the list-tags command). This should create a buffer which contains a list of all functions calling a set of functions defined in a certain file. This is the sequence of keystrokes:

        1.  <f11>      ;; cscope-find-functions-calling-this-function
        2.  RET        ;; newline [shows results of cscope in a split window]
        3.  C-x C-p    ;; mark-page
        4.  C-x C-x    ;; icicle-exchange-point-and-mark
        5.  <up>       ;; previous-line
        6.  <end>      ;; end-of-line [region to copy has been marked]
        7.  <f7>       ;; append-results-to-buffer
        8.  C-x ESC O  ;; [move back to split window on the right]
        9.  C-x b      ;; icicle-buffer [Switch back to *Tags List* buffer]
        10. *Tags      ;; self-insert-command * 5
        11. SPC        ;; self-insert-command
        12. List*      ;; self-insert-command * 5
        13. RET        ;; newline
        14. <down>     ;; next-line [Position point on next tag in the list]

    Problem: I get no results in the buffer, and I found out that's because steps 3-7 execute even before cscope prints the results of the query made in steps 1-2. I can insert a pause in the macro by using C-x q, but I'd rather have the macro wait after step 2 until cscope has returned with the results, and only then continue. I suspect this is not possible through a macro; maybe a Lisp function could do it... I'm not a Lisp expert myself. Can someone please help? Thanks!

    Details: I have Icicles installed, so by default I get the word at point in the current buffer as input in the minibuffer. F11 is bound to cscope-find-functions-calling-this-function. windmove is installed, and C-x ESC o (step 8 above) takes you to the right window. F7 is bound to append-results-to-buffer, which is defined as:

        (defun append-results-to-buffer ()
          (interactive)
          (append-to-buffer (get-buffer-create "c1") (point) (mark)))

    This function just appends the currently marked region to a buffer named "c1".

    Read the article

  • How can I iterate over a collection of objects returned by a LINQ-to-XML query?

    - by billmaya
    I've got this XML:

        <BillingLog>
          <BillingItem>
            <date-and-time>2003-11-04</date-and-time>
            <application-name>Billing Service</application-name>
            <severity>Warning</severity>
            <process-id>123</process-id>
            <description>Timed out on a connection</description>
            <detail>Timed out after three retries.</detail>
          </BillingItem>
          <BillingItem>
            <date-and-time>2010-05-15</date-and-time>
            <application-name>Callback Service</application-name>
            <severity>Error</severity>
            <process-id>456</process-id>
            <description>Unable to process callback</description>
            <detail>Reconciliation timed out after two retries.</detail>
          </BillingItem>
        </BillingLog>

    That I want to project using LINQ-to-XML into a collection of BillingItem objects contained in a single BillingLog object:

        public class BillingLog
        {
            public IEnumerable<BillingItem> items { get; set; }
        }

        public class BillingItem
        {
            public string Date { get; set; }
            public string ApplicationName { get; set; }
            public string Severity { get; set; }
            public int ProcessId { get; set; }
            public string Description { get; set; }
            public string Detail { get; set; }
        }

    This is the LINQ query that I'm using to project the XML (which is contained in the string variable source):

        XDocument xdoc = XDocument.Parse(source);

        var log = from i in xdoc.Elements("BillingLog")
                  select new BillingLog
                  {
                      items = from j in i.Descendants("BillingItem")
                              select new BillingItem
                              {
                                  Date = (string)j.Element("date-and-time"),
                                  ApplicationName = (string)j.Element("application-name"),
                                  Severity = (string)j.Element("severity"),
                                  ProcessId = (int)j.Element("process-id"),
                                  Description = (string)j.Element("description"),
                                  Detail = (string)j.Element("detail")
                              }
                  };

    When I try and iterate over the objects in log using foreach:

        foreach (BillingItem item in log)
        {
            Console.WriteLine("{0} | {1} | {2} | {3} | {4} | {5}",
                item.Date, item.ApplicationName, item.Severity,
                item.ProcessId.ToString(), item.Description, item.Detail);
        }

    I get the following error message from LINQPad:

        Cannot convert type 'UserQuery.BillingLog' to 'UserQuery.BillingItem'

    Thanks in advance.
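    The error is in the iteration, not the query: log is a sequence of BillingLog objects (here exactly one, since the document has a single BillingLog root), and each BillingLog holds the BillingItems in its items property. A minimal sketch of one fix, keeping the names from the question:

        foreach (BillingLog logEntry in log)
        {
            foreach (BillingItem item in logEntry.items)
            {
                Console.WriteLine("{0} | {1} | {2} | {3} | {4} | {5}",
                    item.Date, item.ApplicationName, item.Severity,
                    item.ProcessId, item.Description, item.Detail);
            }
        }

        // or, flattening the nesting with LINQ (requires using System.Linq):
        // foreach (BillingItem item in log.SelectMany(l => l.items)) { ... }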

    Read the article

  • Look over my C# SQLite Query, what am I doing wrong?

    - by CODe
    I'm writing a WinForms database application using SQLite and C#. I have a SQLite query that is failing, and I'm unsure as to where I'm going wrong, as I've tried everything I could think of.

        public DataTable searchSubs(String businessName, String contactName)
        {
            string SQL = null;

            if ((businessName != null && businessName != "") && (contactName != null && contactName != ""))
            {
                // provided business name and contact name for search
                SQL = "SELECT * FROM SUBCONTRACTOR WHERE BusinessName LIKE %@BusinessName% AND Contact LIKE %@ContactName%";
            }
            else if ((businessName != null && businessName != "") && (contactName == null || contactName == ""))
            {
                // provided business name only for search
                SQL = "SELECT * FROM SUBCONTRACTOR WHERE BusinessName LIKE %@BusinessName%";
            }
            else if ((businessName == null || businessName == "") && (contactName != null && contactName != ""))
            {
                // provided contact name only for search
                SQL = "SELECT * FROM SUBCONTRACTOR WHERE Contact LIKE %@ContactName%";
            }
            else if ((businessName == null || businessName == "") && (contactName == null || contactName == ""))
            {
                // provided no search information
                SQL = "SELECT * FROM SUBCONTRACTOR";
            }

            SQLiteCommand cmd = new SQLiteCommand(SQL);
            cmd.Parameters.AddWithValue("@BusinessName", businessName);
            cmd.Parameters.AddWithValue("@ContactName", contactName);
            cmd.Connection = connection;

            SQLiteDataAdapter da = new SQLiteDataAdapter(cmd);
            DataSet ds = new DataSet();

            try
            {
                da.Fill(ds);
                DataTable dt = ds.Tables[0];
                return dt;
            }
            catch (Exception e)
            {
                MessageBox.Show(e.ToString());
                return null;
            }
            finally
            {
                cmd.Dispose();
                connection.Close();
            }
        }

    I continually get an error saying that it is failing near the %'s. That's all fine and dandy, but I guess I'm structuring it wrong, and I don't know where! I tried adding apostrophes around the "like" variables, like this:

        SQL = "SELECT * FROM SUBCONTRACTOR WHERE Contact LIKE '%@ContactName%'";

    and quite honestly, that is all I can think of. Anyone have any ideas?
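    The parse error comes from the bare % signs in the SQL text: a parameter placeholder can't be spliced into a string literal that way (and inside '...' quotes, @ContactName is just text, never a parameter). The usual fix is to keep the SQL as a plain LIKE @Param and put the wildcards into the parameter's value. A sketch against the first branch; the same idea applies to the others:

        // SQL keeps plain placeholders - no % in the statement text
        SQL = "SELECT * FROM SUBCONTRACTOR WHERE BusinessName LIKE @BusinessName AND Contact LIKE @ContactName";

        // the wildcards travel inside the parameter values
        cmd.Parameters.AddWithValue("@BusinessName", "%" + businessName + "%");
        cmd.Parameters.AddWithValue("@ContactName", "%" + contactName + "%");

    Alternatively, the concatenation can be done in SQL itself with SQLite's || operator: WHERE Contact LIKE '%' || @ContactName || '%'.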

    Read the article

  • How to query across many-to-many association in NHibernate?

    - by Splash
    I have two entities, Post and Tag. The Post entity has a collection of Tags which represents a many-to-many join between the two (that is, each post can have any number of tags and each tag can be associated with any number of posts). I am trying to retrieve all Posts which have a given tag. However, I seem to be unable to get this query right. I essentially want something which means the same as the following pseudo-HQL:

        from Posts p
        where p.Tags contains (from Tags t where t.Name = :tagName)
        order by p.DateTime

    The only thing I've found which even approaches this is a post by Ayende. However, his approach requires the entity on the other side (in my case, Tag) to have a collection showing the other end of the many-to-many. I don't have this and don't really wish to have it. I find it hard to believe this can't be done. What am I missing? My entities & mappings look like this (simplified):

        public class Post
        {
            public virtual int Id { get; set; }
            public virtual string Title { get; set; }

            private IList<Tag> tags = new List<Tag>();
            public virtual IEnumerable<Tag> Tags { get { return tags; } }

            public virtual void AddTag(Tag tag)
            {
                this.tags.Add(tag);
            }
        }

        public class PostMap : ClassMap<Post>
        {
            public PostMap()
            {
                Id(x => x.Id).GeneratedBy.HiLo("99");
                Map(x => x.Title);
                HasManyToMany(x => x.Tags);
            }
        }

        public class Tag
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
        }

        public class TagMap : ClassMap<Tag>
        {
            public TagMap()
            {
                Id(x => x.Id).GeneratedBy.HiLo("99");
                Map(x => x.Name).Unique();
            }
        }
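    For what it's worth, the pseudo-HQL above is close to something NHibernate will actually accept: HQL can join through the Post.Tags collection without Tag needing an inverse collection. A hedged sketch, added for this digest; it assumes an open ISession and that Post exposes the DateTime property used in the pseudo-HQL (the simplified entity above omits it). The distinct guards against duplicate Posts coming back from the join:

        var posts = session.CreateQuery(
                @"select distinct p
                  from Post p
                      join p.Tags t
                  where t.Name = :tagName
                  order by p.DateTime")
            .SetString("tagName", tagName)
            .List<Post>();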

    Read the article

  • How to reflect over T to build an expression tree for a query?

    - by Alex
    Hi all, I'm trying to build a generic class to work with entities from EF. This class talks to repositories, but it's this class that creates the expressions sent to the repositories. Anyway, I'm just trying to implement one virtual method that will act as a base for common querying. Specifically, it will accept an int, and it only needs to perform a query over the primary key of the entity in question.

    I've been screwing around with it and I've built a reflection-based version which may or may not work. I say that because I get a NotSupportedException with the message "LINQ to Entities does not recognize the method 'System.Object GetValue(System.Object, System.Object[])' method, and this method cannot be translated into a store expression." So then I tried another approach, and it produced the same exception but with the error "The LINQ expression node type 'ArrayIndex' is not supported in LINQ to Entities." I know it's because EF will not parse the expression the way L2S will. Anyway, I'm hoping someone with a bit more experience can point me in the right direction on this. I'm posting the entire class with both attempts I've made:

        public class Provider<T> where T : class
        {
            protected readonly Repository<T> Repository = null;
            private readonly string TEntityName = typeof(T).Name;

            [Inject]
            public Provider(Repository<T> Repository)
            {
                this.Repository = Repository;
            }

            public virtual void Add(T TEntity)
            {
                this.Repository.Insert(TEntity);
            }

            public virtual T Get(int PrimaryKey)
            {
                // The LINQ expression node type 'ArrayIndex' is not supported in
                // LINQ to Entities.
                return this.Repository.Select(
                    t => (((int)(t as EntityObject).EntityKey.EntityKeyValues[0].Value) == PrimaryKey)).Single();

                // LINQ to Entities does not recognize the method
                // 'System.Object GetValue(System.Object, System.Object[])' method,
                // and this method cannot be translated into a store expression.
                return this.Repository.Select(
                    t => (((int)t.GetType().GetProperties().Single(
                        p => (p.Name == (this.TEntityName + "Id"))).GetValue(t, null)) == PrimaryKey)).Single();
            }

            public virtual IList<T> GetAll()
            {
                return this.Repository.Select().ToList();
            }

            protected virtual void Save()
            {
                this.Repository.Update();
            }
        }
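    One way out, sketched here for this digest, is to stop reflecting inside the lambda and instead build the expression tree up front, so that what reaches LINQ to Entities is a plain property access it knows how to translate. This assumes the same naming convention as the class above (key property = entity name + "Id") and that Repository.Select accepts an Expression<Func<T, bool>>:

        using System.Linq.Expressions;

        public virtual T Get(int PrimaryKey)
        {
            // Builds the predicate: t => t.<EntityName>Id == PrimaryKey
            var parameter = Expression.Parameter(typeof(T), "t");
            var key = Expression.Property(parameter, this.TEntityName + "Id");
            var body = Expression.Equal(key, Expression.Constant(PrimaryKey));
            var predicate = Expression.Lambda<Func<T, bool>>(body, parameter);

            return this.Repository.Select(predicate).Single();
        }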

    Read the article

  • Adding to database. No repeat on refresh

    - by kevstarlive
    I have this code:

    Episode.php:

        <?php
        $feedback = new feedback;
        $articles = $feedback->fetch_all();

        if (isset($_POST['name'], $_POST['post'])) {
            $cast = $_GET['id'];
            $name = $_POST['name'];
            $email = $_POST['email'];
            $post = nl2br($_POST['post']);
            $ipaddress = $_SERVER['REMOTE_ADDR'];

            if (empty($name) or empty($post)) {
                $error = 'All Fields Are Required!';
            } else {
                $query = $pdo->prepare('INSERT INTO comments (cast, name, email, post, ipaddress) VALUES(?, ?, ?, ?, ?)');
                $query->bindValue(1, $cast);
                $query->bindValue(2, $name);
                $query->bindValue(3, $email);
                $query->bindValue(4, $post);
                $query->bindValue(5, $ipaddress);
                $query->execute();
            }
        }
        ?>
        <div align="center">
            <strong>Give us your feedback?</strong><br /><br />
            <?php if (isset($error)) { ?>
                <small style="color:#aa0000;"><?php echo $error; ?></small><br /><br />
            <?php } ?>
            <form action="episode.php?id=<?php echo $data['cast_id']; ?>" method="post" autocomplete="off" enctype="multipart/form-data">
                <input type="text" name="name" placeholder="Name" /> /
                <input type="text" name="email" placeholder="Email" /><small style="color:#aa0000;">*</small><br /><br />
                <textarea rows="10" cols="50" name="post" placeholder="Comment"></textarea><br /><br />
                <input type="submit" onclick="myFunction()" value="Add Comment" /><br /><br />
                <small style="color:#aa0000;">* <b>Email will not be displayed publicly</b></small><br />
            </form>
        </div>

    Include.php:

        class feedback {
            public function fetch_all() {
                global $pdo;
                $query = $pdo->prepare("SELECT * FROM comments");
                $query->bindValue(1, $cast);
                $query->execute();
                return $query->fetchAll();
            }
        }

    This code updates the database as it is supposed to, and after submission it reloads the current page as given in the form action. But when I refresh the page to see the comment being added, the browser asks to resubmit, and if I hit submit the comment is added again. How can I stop this from happening? Maybe I could hide the comment box and display a thank-you message, but that would not stop a repeat entry. Please help. Thank you. Kev

    Read the article

  • Curious about IObservable? Here’s a quick example to get you started!

    - by Roman Schindlauer
    Have you heard about IObservable/IObserver support in Microsoft StreamInsight 1.1? Then you probably want to try it out. If this is your first incursion into the IObservable/IObserver pattern, this blog post is for you! StreamInsight 1.1 introduced the ability to use IEnumerable and IObservable objects as event sources and sinks. The IEnumerable case is pretty straightforward, since many data collections are already surfacing as this type. This was already covered by Colin in his blog. Creating your own IObservable event source is a little more involved but no less exciting – here is a primer:

    First, let's look at a very simple Observable data source. All it does is publish an integer in regular time periods to its registered observers. (For more information on IObservable, see http://msdn.microsoft.com/en-us/library/dd990377.aspx.)

        sealed class RandomSubject : IObservable<int>, IDisposable
        {
            private bool _done;
            private readonly List<IObserver<int>> _observers;
            private readonly Random _random;
            private readonly object _sync;
            private readonly Timer _timer;
            private readonly int _timerPeriod;

            /// <summary>
            /// Random observable subject. It produces an integer in regular time periods.
            /// </summary>
            /// <param name="timerPeriod">Timer period (in milliseconds)</param>
            public RandomSubject(int timerPeriod)
            {
                _done = false;
                _observers = new List<IObserver<int>>();
                _random = new Random();
                _sync = new object();
                _timer = new Timer(EmitRandomValue);
                _timerPeriod = timerPeriod;
                Schedule();
            }

            public IDisposable Subscribe(IObserver<int> observer)
            {
                lock (_sync)
                {
                    _observers.Add(observer);
                }
                return new Subscription(this, observer);
            }

            public void OnNext(int value)
            {
                lock (_sync)
                {
                    if (!_done)
                    {
                        foreach (var observer in _observers)
                        {
                            observer.OnNext(value);
                        }
                    }
                }
            }

            public void OnError(Exception e)
            {
                lock (_sync)
                {
                    foreach (var observer in _observers)
                    {
                        observer.OnError(e);
                    }
                    _done = true;
                }
            }

            public void OnCompleted()
            {
                lock (_sync)
                {
                    foreach (var observer in _observers)
                    {
                        observer.OnCompleted();
                    }
                    _done = true;
                }
            }

            void IDisposable.Dispose()
            {
                _timer.Dispose();
            }

            private void Schedule()
            {
                lock (_sync)
                {
                    if (!_done)
                    {
                        _timer.Change(_timerPeriod, Timeout.Infinite);
                    }
                }
            }

            private void EmitRandomValue(object _)
            {
                var value = (int)(_random.NextDouble() * 100);
                Console.WriteLine("[Observable]\t" + value);
                OnNext(value);
                Schedule();
            }

            private sealed class Subscription : IDisposable
            {
                private readonly RandomSubject _subject;
                private IObserver<int> _observer;

                public Subscription(RandomSubject subject, IObserver<int> observer)
                {
                    _subject = subject;
                    _observer = observer;
                }

                public void Dispose()
                {
                    IObserver<int> observer = _observer;
                    if (null != observer)
                    {
                        lock (_subject._sync)
                        {
                            _subject._observers.Remove(observer);
                        }
                        _observer = null;
                    }
                }
            }
        }

    So far, so good. Now let's write a program that consumes data emitted by the observable as a stream of point events in a StreamInsight query. First, let's define our payload type:

        class Payload
        {
            public int Value { get; set; }

            public override string ToString()
            {
                return "[StreamInsight]\tValue: " + Value.ToString();
            }
        }

    Now, let's write the program. First, we will instantiate the observable subject. Then we'll use the ToPointStream() method to consume it as a stream. We can now write any query over the source - here, a simple pass-through query.

        class Program
        {
            static void Main(string[] args)
            {
                Console.WriteLine("Starting observable source...");
                using (var source = new RandomSubject(500))
                {
                    Console.WriteLine("Started observable source.");
                    using (var server = Server.Create("Default"))
                    {
                        var application = server.CreateApplication("My Application");

                        var stream = source.ToPointStream(application,
                            e => PointEvent.CreateInsert(DateTime.Now, new Payload { Value = e }),
                            AdvanceTimeSettings.StrictlyIncreasingStartTime,
                            "Observable Stream");

                        var query = from e in stream
                                    select e;

                        [...]

    We're done with consuming input and querying it! But you probably want to see the output of the query. Did you know you can turn a query into an observable subject as well? Let's do precisely that, and exploit the Reactive Extensions for .NET (http://msdn.microsoft.com/en-us/devlabs/ee794896.aspx) to quickly visualize the output. Notice we're subscribing Console.WriteLine() to the query, a pattern you may find useful for quick debugging of your queries. Reminder: you'll need to install the Reactive Extensions for .NET (Rx for .NET Framework 4.0), and reference System.CoreEx and System.Reactive in your project.

                        [...]

                        Console.ReadLine();
                        Console.WriteLine("Starting query...");
                        using (query.ToObservable().Subscribe(Console.WriteLine))
                        {
                            Console.WriteLine("Started query.");
                            Console.ReadLine();
                            Console.WriteLine("Stopping query...");
                        }
                        Console.WriteLine("Stopped query.");
                    }
                    Console.ReadLine();
                    Console.WriteLine("Stopping observable source...");
                    source.OnCompleted();
                }
                Console.WriteLine("Stopped observable source.");
            }
        }

    We hope this blog post gets you started. And for bonus points, you can go ahead and rewrite the observable source (the RandomSubject class) using the Reactive Extensions for .NET! The entire sample project is attached to this article. Happy querying!

    Regards,
    The StreamInsight Team

    Read the article

  • Creating PHP Forms with show/hide functionality [migrated]

    - by ronquiq
    I want to create two reports and submit the report data to the database by using two functions defined in a class. Here I have two buttons: "Create ES" and "Create RP". Right now my forms are working fine and I can insert data successfully, but the problem is that when I click on submit after filling in the form data, the content is hidden and the first div content "cs_content" is displayed, and I need to click again to submit again. Could anyone give a solution for this?

    Requirement: When I click on "Create CS", I should be able to fill the form and submit the data successfully, with a success message and any form input errors displayed within "cs_content". When I click on "Create RP", the same should happen within "rp_content".

    home.php:

        <?php
        require 'classes/class.report.php';
        $report = new Report($db);
        ?>
        <html>
        <head>
        <script src="js/jqueryv1.10.2.js"></script>
        <script>
        $(document).ready(function () {
            //$("#cs_content").show();
            $('#cs').click(function () {
                $('#cs_content').fadeIn('slow');
                $('#rp_content').hide();
            });
            $('#rp').click(function () {
                $('#rp_content').fadeIn('slow');
                $('#cs_content').hide();
            });
        });
        </script>
        </head>
        <body>
        <div class="container2">
            <div style="margin:0px 0px;padding:3px 217px;overflow:hidden;">
                <div id="cs" style="float:left;margin:0px 0px;padding:7px;"><input type="button" value="CREATE CS"></div>
                <div id="rp" style="float:left;margin:0px 0px;padding:7px;"><input type="button" value="CREATE RP"></div><br>
            </div>
            <div id="cs_content">
                <?php $report->create_cs_report(); ?>
            </div>
            <div id="rp_content" style="display:none;">
                <?php $report->create_rp_report(); ?>
            </div>
        </div>
        </body>
        </html>

    class.report.php:

        <?php
        class Report
        {
            private $db;

            public function __construct($database) {
                $this->db = $database;
            }

            public function create_cs_report() {
                if (isset($_POST['create_es_report'])) {
                    $report_name  = htmlentities($_POST['report_name']);
                    $from_address = htmlentities($_POST['from_address']);
                    $subject      = htmlentities($_POST['subject']);
                    $reply_to     = htmlentities($_POST['reply_to']);

                    if (empty($_POST['report_name']) || empty($_POST['from_address']) || empty($_POST['subject']) || empty($_POST['reply_to'])) {
                        $errors[] = '<span class="error">All fields are required.</span>';
                    } else {
                        if (isset($_POST['report_name']) && empty($_POST['report_name'])) {
                            $errors[] = '<span class="error">Report Name is required</span>';
                        } else if (!ctype_alnum($_POST['report_name'])) {
                            $errors[] = '<span class="error">Report Name: Whitespace is not allowed, only alphabets and numbers are required</span>';
                        }
                        if (isset($_POST['from_address']) && empty($_POST['from_address'])) {
                            $errors[] = '<span class="error">From address is required</span>';
                        } else if (filter_var($_POST['from_address'], FILTER_VALIDATE_EMAIL) === false) {
                            $errors[] = '<span class="error">Please enter a valid From address</span>';
                        }
                        if (isset($_POST['subject']) && empty($_POST['subject'])) {
                            $errors[] = '<span class="error">Subject is required</span>';
                        } else if (!ctype_alnum($_POST['subject'])) {
                            $errors[] = '<span class="error">Subject: Whitespace is not allowed, only alphabets and numbers are required</span>';
                        }
                        if (isset($_POST['reply_to']) && empty($_POST['reply_to'])) {
                            $errors[] = '<span class="error">Reply To is required</span>';
                        } else if (filter_var($_POST['reply_to'], FILTER_VALIDATE_EMAIL) === false) {
                            $errors[] = '<span class="error">Please enter a valid Reply-To address</span>';
                        }
                    }

                    if (empty($errors) === true) {
                        $query = $this->db->prepare("INSERT INTO report(report_name, from_address, subject, reply_to) VALUES (?, ?, ?, ?)");
                        $query->bindValue(1, $report_name);
                        $query->bindValue(2, $from_address);
                        $query->bindValue(3, $subject);
                        $query->bindValue(4, $reply_to);
                        try {
                            $query->execute();
                        } catch (PDOException $e) {
                            die($e->getMessage());
                        }
                        header('Location:home.php?success');
                        exit();
                    }
                }

                if (isset($_GET['success']) && empty($_GET['success'])) {
                    header('Location:home.php');
                    echo '<span class="error">Report is succesfully created</span>';
                }
                ?>
                <form action="" method="POST" accept-charset="UTF-8">
                <div style="font-weight:bold;padding:17px 80px;text-decoration:underline;">Section A</div>
                <table class="create_report">
                <tr><td><label>Report Name</label><span style="color:#A60000">*</span></td>
                    <td><input type="text" name="report_name" required placeholder="Name of the report" value="<?php if(isset($_POST["report_name"])) echo $report_name; ?>" size="30" maxlength="30"></td></tr>
                <tr><td><label>From</label><span style="color:#A60000">*</span></td>
                    <td><input type="text" name="from_address" required placeholder="From address" value="<?php if(isset($_POST["from_address"])) echo $from_address; ?>" size="30"></td></tr>
                <tr><td><label>Subject</label><span style="color:#A60000">*</span></td>
                    <td><input type="text" name="subject" required placeholder="Subject" value="<?php if(isset($_POST["subject"])) echo $subject; ?>" size="30"></td></tr>
                <tr><td><label>Reply To</label><span style="color:#A60000">*</span></td>
                    <td><input type="text" name="reply_to" required placeholder="Reply address" value="<?php if(isset($_POST["reply_to"])) echo $reply_to; ?>" size="30"></td></tr>
                <tr><td><input type="submit" value="create report" style="background:#8AC007;color:#080808;padding:6px;" name="create_es_report"></td></tr>
                </table>
                </form>
                <?php
                // IF THERE ARE ERRORS, THEY WOULD BE DISPLAYED HERE
                if (empty($errors) === false) {
                    echo '<div>' . implode('</p><p>', $errors) . '</div>';
                }
            }

            public function create_rp_report() {
                if (isset($_POST['create_rp_report'])) {
                    $report_name = htmlentities($_POST['report_name']);
                    $to_address  = htmlentities($_POST['to_address']);
                    $subject     = htmlentities($_POST['subject']);
                    $reply_to    = htmlentities($_POST['reply_to']);

                    if (empty($_POST['report_name']) || empty($_POST['to_address']) || empty($_POST['subject']) || empty($_POST['reply_to'])) {
                        $errors[] = '<span class="error">All fields are required.</span>';
                    } else {
                        if (isset($_POST['report_name']) && empty($_POST['report_name'])) {
                            $errors[] = '<span class="error">Report Name is required</span>';
                        } else if (!ctype_alnum($_POST['report_name'])) {
                            $errors[] = '<span class="error">Report Name: Whitespace is not allowed, only alphabets and numbers are required</span>';
                        }
                        if (isset($_POST['to_address']) && empty($_POST['to_address'])) {
                            $errors[] = '<span class="error">to address is required</span>';
                        } else if (filter_var($_POST['to_address'], FILTER_VALIDATE_EMAIL) === false) {
                            $errors[] = '<span class="error">Please enter a valid to address</span>';
                        }
                        if (isset($_POST['subject']) && empty($_POST['subject'])) {
                            $errors[] = '<span class="error">Subject is required</span>';
                        } else if (!ctype_alnum($_POST['subject'])) {
                            $errors[] = '<span class="error">Subject: Whitespace is not allowed, only alphabets and numbers are required</span>';
                        }
                        if (isset($_POST['reply_to']) && empty($_POST['reply_to'])) {
                            $errors[] = '<span class="error">Reply To is required</span>';
                        } else if (filter_var($_POST['reply_to'], FILTER_VALIDATE_EMAIL) === false) {
                            $errors[] = '<span class="error">Please enter a valid Reply-To address</span>';
                        }
                    }

                    if (empty($errors) === true) {
                        $query = $this->db->prepare("INSERT INTO report(report_name, to_address, subject, reply_to) VALUES (?, ?, ?, ?)");
                        $query->bindValue(1, $report_name);
                        $query->bindValue(2, $to_address);
                        $query->bindValue(3, $subject);
                        $query->bindValue(4, $reply_to);
                        try {
                            $query->execute();
                        } catch (PDOException $e) {
                            die($e->getMessage());
                        }
                        header('Location:home.php?success');
                        exit();
                    }
                }

                if (isset($_GET['success']) && empty($_GET['success'])) {
                    header('Location:home.php');
                    echo '<span class="error">Report is succesfully created</span>';
                }
                ?>
                <form action="" method="POST" accept-charset="UTF-8">
                <div style="font-weight:bold;padding:17px 80px;text-decoration:underline;">Section A</div>
                <table class="create_report">
                <tr><td><label>Report Name</label><span style="color:#A60000">*</span></td>
                    <td><input type="text" name="report_name" required placeholder="Name of the report" value="<?php if(isset($_POST["report_name"])) echo $report_name; ?>" size="30" maxlength="30"></td></tr>
                <tr><td><label>to</label><span style="color:#A60000">*</span></td>
                    <td><input type="text" name="to_address" required placeholder="to address" value="<?php if(isset($_POST["to_address"])) echo $to_address; ?>" size="30"></td></tr>
                <tr><td><label>Subject</label><span style="color:#A60000">*</span></td>
                    <td><input type="text" name="subject" required placeholder="Subject" value="<?php if(isset($_POST["subject"])) echo $subject; ?>" size="30"></td></tr>
                <tr><td><label>Reply To</label><span style="color:#A60000">*</span></td>
                    <td><input type="text" name="reply_to" required placeholder="Reply address" value="<?php if(isset($_POST["reply_to"])) echo $reply_to; ?>" size="30"></td></tr>
                <tr><td><input type="submit" value="create report" style="background:#8AC007;color:#080808;padding:6px;" name="create_rp_report"></td></tr>
                </table>
                </form>
                <?php
                // IF THERE ARE ERRORS, THEY WOULD BE DISPLAYED HERE
                if (empty($errors) === false) {
                    echo '<div>' . implode('</p><p>', $errors) . '</div>';
                }
            }
        } // Report CLASS ENDS


  • How do I use C# and ADO.NET to query an Oracle table with a spatial column of type SDO_GEOMETRY?

    - by John Donahue
    My development machine is running Windows 7 Enterprise, 64-bit. I am using Visual Studio 2010 Release Candidate and connecting to an Oracle 11g Enterprise server, version 11.1.0.7.0. I had a difficult time locating Oracle client software made for 64-bit Windows systems and eventually downloaded what I assume is the proper client connectivity software. I added a reference to "Oracle.DataAccess", which is version 2.111.6.0 (runtime version v2.0.50727). I am targeting .NET CLR 4.0, since all properties of my VS solution are defaults and this is 2010 RC. I was then able to write a console application in C# that established connectivity, executed a SELECT statement, and properly returned data when the table in question does NOT contain a spatial column. My problem is that this no longer works when the table I query has a column of type SDO_GEOMETRY in it. Below is the simple console application that reproduces the problem. When the code reaches the "ExecuteReader" line, an exception is raised with the message "Unsupported column datatype".

        using System;
        using System.Data;
        using Oracle.DataAccess.Client;

        namespace ConsoleTestOracle
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string oradb = string.Format("Data Source={0};User Id={1};Password={2};",
                        "hostname/servicename", "login", "password");
                    try
                    {
                        using (OracleConnection conn = new OracleConnection(oradb))
                        {
                            conn.Open();
                            OracleCommand cmd = new OracleCommand();
                            cmd.Connection = conn;
                            cmd.CommandText = "select * from SDO_8307_2D_POINTS";
                            cmd.CommandType = CommandType.Text;
                            OracleDataReader dr = cmd.ExecuteReader();
                        }
                    }
                    catch (Exception e)
                    {
                        string error = e.Message;
                    }
                }
            }
        }

    The fact that this code works against a table without a spatial SDO_GEOMETRY column makes me think my Windows 7 machine is properly configured, so I am surprised to get this exception when the table contains different kinds of columns. I don't know if some configuration is needed on my machine or on the Oracle server, or if the Oracle client software I installed is wrong or outdated. Here is the SQL I used to create the table and populate it with some rows containing points in the spatial column, in case you want to reproduce this exactly.

    SQL create commands:

        create table SDO_8307_2D_Points (
            ObjectID number(38) not null unique,
            TestID   number,
            shape    SDO_GEOMETRY);

        Insert into SDO_8307_2D_Points values (1, 1,
            SDO_GEOMETRY(2001, 8307, null, SDO_ELEM_INFO_ARRAY(1, 1, 1),
                         SDO_ORDINATE_ARRAY(10.0, 10.0)));
        Insert into SDO_8307_2D_Points values (2, 2,
            SDO_GEOMETRY(2001, 8307, null, SDO_ELEM_INFO_ARRAY(1, 1, 1),
                         SDO_ORDINATE_ARRAY(10.0, 20.0)));

        insert into user_sdo_geom_metadata values ('SDO_8307_2D_Points', 'SHAPE',
            SDO_DIM_ARRAY(SDO_DIM_ELEMENT('Lat', -180, 180, 0.05),
                          SDO_DIM_ELEMENT('Long', -90, 90, 0.05)), 8307);

        create index SDO_8307_2D_Point_indx on SDO_8307_2D_Points(shape)
            indextype is mdsys.spatial_index PARAMETERS ('sdo_indx_dims=2');

    Any advice or insights would be greatly appreciated. Thank you.
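    A hedged workaround to try (my sketch, not a confirmed fix; it assumes the Oracle Spatial function SDO_UTIL.TO_WKTGEOMETRY is available on the server): have Oracle serialize the geometry to WKT text so the reader only materializes ordinary scalar columns. Swapping the CommandText in the program above:

        // Hypothetical: render SDO_GEOMETRY as WKT on the server so ODP.NET
        // never has to map the object type itself.
        cmd.CommandText =
            "select ObjectID, TestID, SDO_UTIL.TO_WKTGEOMETRY(shape) as shape_wkt " +
            "from SDO_8307_2D_POINTS";
        using (OracleDataReader dr = cmd.ExecuteReader())
        {
            while (dr.Read())
            {
                // shape_wkt arrives as character data, e.g. "POINT (10.0 10.0)"
                Console.WriteLine("{0}: {1}", dr.GetValue(0), dr.GetString(2));
            }
        }

    If this succeeds, the failure is narrowed to ODP.NET's default inability to map the SDO_GEOMETRY object type, rather than a broken client install.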


  • Why is my PHP query executing twice on page load?

    - by user1826238
    I am newish to PHP and I seem to be having an issue with an insert statement that executes twice when I open this page to view a document. In the database the second insert is one second later. It happens in Google Chrome only and on this page only; IE has no issue, and I don't have Firefox to check. view_document.php:

        <?php
        require_once($_SERVER['DOCUMENT_ROOT'] . '/../includes/core.php');
        require_once($_SERVER['DOCUMENT_ROOT'] . '/../includes/connect.php');

        $webusername = $_SESSION['webname'];

        if (isset($_GET['document'])) {
            $ainumber = (int) $_GET['document'];
            if (!ctype_digit($_GET['document']) || !preg_match('~^[0-9]+$~', $_GET['document']) || !is_numeric($_GET['document'])) {
                $_SESSION = array();
                session_destroy();
                header('Location: login.php');
            } else {
                $stmt = $connect->prepare("SELECT s_filename, s_reference FROM dmsmain WHERE s_ainumber = ?") or die(mysqli_error());
                $stmt->bind_param('s', $ainumber);
                $stmt->execute();
                $stmt->bind_result($filename, $reference);
                $stmt->fetch();
                $stmt->close();

                $file = $_SERVER['DOCUMENT_ROOT'] . '/../dms/files/' . $filename . '.pdf';
                if (file_exists($file)) {
                    header('Content-Type: application/pdf');
                    header('Content-Disposition: inline; filename=' . basename($file));
                    header('Content-Transfer-Encoding: binary');
                    header('Content-Length: ' . filesize($file));
                    header('Accept-Ranges: bytes');
                    readfile($file);

                    $stmt = $connect->prepare("INSERT INTO dmslog (s_reference, s_userid, s_lastactivity, s_actiontype) VALUES (?, ?, ?, ?)") or die(mysqli_error());
                    date_default_timezone_set('Africa/Johannesburg');
                    $date = date('Y-m-d H:i:s');
                    $actiontype = 'DL';
                    $stmt->bind_param('ssss', $reference, $webusername, $date, $actiontype);
                    $stmt->execute();
                    $stmt->close();
                } else {
                    $missing = "<b>File not found</b>";
                }
            }
        }
        ?>

    My HTTP access records, I assume:

        [15/Nov/2012:10:14:32 +0200] "POST /dms/search.php HTTP/1.1" 200 5783 "http://www.denso.co.za/dms/search.php" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11"
        [15/Nov/2012:10:14:33 +0200] "GET /favicon.ico HTTP/1.1" 404 - "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11"
        [15/Nov/2012:10:14:34 +0200] "GET /dms/view_document.php?document=8 HTTP/1.1" 200 2965 "http://www.denso.co.za/dms/search.php" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11"
        [15/Nov/2012:10:14:35 +0200] "GET /favicon.ico HTTP/1.1" 404 - "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11"

    I have checked my <img src=''> links and I don't see a problem with them. The records indicate there is a favicon.ico request, so I created a blank favicon, placed it in my public_html folder, and linked it in the page like so: <link href="../favicon.ico" rel="shortcut icon" type="image/x-icon" />. Unfortunately that did not work, as the statement still executes twice. I am unsure if it is a favicon issue, as my upload page uses an insert query and it executes once. If someone could please tell me where I am going wrong or point me in the right direction I would be very grateful.
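    One commonly reported cause (unverified here) is that Chrome's built-in PDF viewer can fetch the same document twice, which would fire the logging insert on both requests. A hedged mitigation sketch, assuming MySQL and the mysqli connection from the code above: make the audit insert idempotent by skipping it when an identical download was logged within the last few seconds.

        // Hypothetical guard: absorb duplicate browser requests by refusing to
        // log the same user/document/action twice within a 5-second window.
        $stmt = $connect->prepare(
            "INSERT INTO dmslog (s_reference, s_userid, s_lastactivity, s_actiontype)
             SELECT ?, ?, ?, ?
             FROM DUAL
             WHERE NOT EXISTS (
                 SELECT 1 FROM dmslog
                 WHERE s_reference = ? AND s_userid = ? AND s_actiontype = 'DL'
                   AND s_lastactivity > DATE_SUB(NOW(), INTERVAL 5 SECOND))");
        $stmt->bind_param('ssssss', $reference, $webusername, $date, $actiontype,
                          $reference, $webusername);
        $stmt->execute();
        $stmt->close();

    This treats the symptom rather than the cause, but it keeps the audit trail clean regardless of which browser behaviour triggers the extra request.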


  • How to query a table returned by a stored procedure, within a procedure?

    - by Shantanu Gupta
    I have a stored procedure that performs some DDL and DML operations. It retrieves data after processing it with a CTE, CROSS APPLY, and other such complex things. It returns four tables which get bound to various sources at the front end. Now I want to use one of those tables for further processing, to get more useful information from it. For example, this table would contain at most about 2000 records, of which I want only the records that belong to lodging.

        PK_CATEGORY_ID | DESCRIPTION | FK_CATEGORY_ID | IMMEDIATE_PARENT | Department_ID | Department_Name | DESCRIPTION_HIERARCHY | DEPTH | IS_ACTIVE | ID_PATH | DESC_PATH
        1  | Food    | NULL | NULL | 1 | Food (Food) | Food | 0 | 1 | 0 | Food
        5  | Chinese | 1 | Food | 1 | Food (Food) | ----Chinese | 1 | 1 | 1 | Food->Chinese
        14 | X | 5 | Chinese | 1 | Food (Food) | --------X | 2 | 1 | 1->5 | Food->Chinese->X
        15 | Y | 5 | Chinese | 1 | Food (Food) | --------Y | 2 | 1 | 1->5 | Food->Chinese->Y
        65 | asdasd | 5 | Chinese | 1 | Food (Food) | --------asdasd | 2 | 1 | 1->5 | Food->Chinese->asdasd
        66 | asdas | 5 | Chinese | 1 | Food (Food) | --------asdas | 2 | 1 | 1->5 | Food->Chinese->asdas
        8  | Italian | 1 | Food | 1 | Food (Food) | ----Italian | 1 | 1 | 1 | Food->Italian
        48 | hfghfgh | 1 | Food | 1 | Food (Food) | ----hfghfgh | 1 | 1 | 1 | Food->hfghfgh
        55 | Asd | 1 | Food | 1 | Food (Food) | ----Asd | 1 | 1 | 1 | Food->Asd
        2  | Lodging | NULL | NULL | 2 | Lodging (Lodging) | Lodging | 0 | 1 | 0 | Lodging
        3  | Room | 2 | Lodging | 2 | Lodging (Lodging) | ----Room | 1 | 1 | 2 | Lodging->Room
        4  | Floor | 3 | Room | 2 | Lodging (Lodging) | --------Floor | 2 | 1 | 2->3 | Lodging->Room->Floor
        9  | First | 4 | Floor | 2 | Lodging (Lodging) | ------------First | 3 | 1 | 2->3->4 | Lodging->Room->Floor->First
        10 | Second | 4 | Floor | 2 | Lodging (Lodging) | ------------Second | 3 | 1 | 2->3->4 | Lodging->Room->Floor->Second
        11 | Third | 4 | Floor | 2 | Lodging (Lodging) | ------------Third | 3 | 1 | 2->3->4 | Lodging->Room->Floor->Third
        29 | Fourth | 4 | Floor | 2 | Lodging (Lodging) | ------------Fourth | 3 | 1 | 2->3->4 | Lodging->Room->Floor->Fourth
        12 | Air Conditioned | 3 | Room | 2 | Lodging (Lodging) | --------Air Conditioned | 2 | 1 | 2->3 | Lodging->Room->Air Conditioned
        20 | With Balcony | 12 | Air Conditioned | 2 | Lodging (Lodging) | ------------With Balcony | 3 | 1 | 2->3->12 | Lodging->Room->Air Conditioned->With Balcony
        24 | Mountain View | 20 | With Balcony | 2 | Lodging (Lodging) | ----------------Mountain View | 4 | 1 | 2->3->12->20 | Lodging->Room->Air Conditioned->With Balcony->Mountain View
        25 | Ocean View | 20 | With Balcony | 2 | Lodging (Lodging) | ----------------Ocean View | 4 | 1 | 2->3->12->20 | Lodging->Room->Air Conditioned->With Balcony->Ocean View
        26 | Garden View | 20 | With Balcony | 2 | Lodging (Lodging) | ----------------Garden View | 4 | 1 | 2->3->12->20 | Lodging->Room->Air Conditioned->With Balcony->Garden View
        52 | Smoking | 20 | With Balcony | 2 | Lodging (Lodging) | ----------------Smoking | 4 | 1 | 2->3->12->20 | Lodging->Room->Air Conditioned->With Balcony->Smoking
        21 | Without Balcony | 12 | Air Conditioned | 2 | Lodging (Lodging) | ------------Without Balcony | 3 | 1 | 2->3->12 | Lodging->Room->Air Conditioned->Without Balcony
        13 | Non Air Conditioned | 3 | Room | 2 | Lodging (Lodging) | --------Non Air Conditioned | 2 | 1 | 2->3 | Lodging->Room->Non Air Conditioned
        22 | With Balcony | 13 | Non Air Conditioned | 2 | Lodging (Lodging) | ------------With Balcony | 3 | 1 | 2->3->13 | Lodging->Room->Non Air Conditioned->With Balcony
        71 | EA | 3 | Room | 2 | Lodging (Lodging) | --------EA | 2 | 1 | 2->3 | Lodging->Room->EA
        50 | Casabellas | 2 | Lodging | 2 | Lodging (Lodging) | ----Casabellas | 1 | 1 | 2 | Lodging->Casabellas
        51 | North Beach | 50 | Casabellas | 2 | Lodging (Lodging) | --------North Beach | 2 | 1 | 2->50 | Lodging->Casabellas->North Beach
        40 | Fooding | NULL | NULL | 40 | Fooding (Fooding) | Fooding | 0 | 1 | 0 | Fooding
        41 | Pizza | 40 | Fooding | 40 | Fooding (Fooding) | ----Pizza | 1 | 1 | 40 | Fooding->Pizza
        45 | Onion | 41 | Pizza | 40 | Fooding (Fooding) | --------Onion | 2 | 1 | 40->41 | Fooding->Pizza->Onion
        47 | Extra Cheeze | 41 | Pizza | 40 | Fooding (Fooding) | --------Extra Cheeze | 2 | 1 | 40->41 | Fooding->Pizza->Extra Cheeze
        77 | Burger | 40 | Fooding | 40 | Fooding (Fooding) | ----Burger | 1 | 1 | 40 | Fooding->Burger

    This result is obtained from a stored procedure which also contains some DML operations. I want something like this:

        select description from exec spName where fk_category_id = 5

    Remember that spName returns four tables, of which I want to query one whose index is known to me. I don't have to send it to the UI before querying it further. I am using SQL Server 2008 but would like a solution that is also compatible with 2005.
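    A hedged sketch of the usual 2005/2008-compatible workaround, my suggestion rather than a verified answer: capture the procedure's rows with INSERT ... EXEC and filter the temp table. Two caveats: the temp table's columns must match the result set exactly, and INSERT ... EXEC fails when the procedure returns several result sets of different shapes, in which case the final SELECT is usually moved into its own procedure or table-valued function first.

        -- Hypothetical: stage the category rows, then query them normally.
        CREATE TABLE #categories (
            PK_CATEGORY_ID        INT,
            DESCRIPTION           NVARCHAR(255),
            FK_CATEGORY_ID        INT,
            IMMEDIATE_PARENT      NVARCHAR(255),
            Department_ID         INT,
            Department_Name       NVARCHAR(255),
            DESCRIPTION_HIERARCHY NVARCHAR(MAX),
            DEPTH                 INT,
            IS_ACTIVE             BIT,
            ID_PATH               NVARCHAR(MAX),
            DESC_PATH             NVARCHAR(MAX)
        );

        INSERT INTO #categories
        EXEC spName;                      -- spName is the asker's procedure

        SELECT DESCRIPTION
        FROM #categories
        WHERE FK_CATEGORY_ID = 5;         -- e.g. everything under Chinese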


  • MySQL - Help me alter this search query involving multiple joins and conditions to get the desired result

    - by sandeepan-nath
    About the system: we are following tags-based search. Tutors create packs; tag relations for tutors are stored in tutors_tag_relations and those for packs in learning_packs_tag_relations. All tags are stored in the tags table. The system has six tables: tutors (users, linked via tutor_details), learning_packs, learning_packs_tag_relations, tutors_tag_relations and tags. Please run the following fresh queries to set up the system:

        CREATE TABLE IF NOT EXISTS learning_packs_tag_relations (
            id_tag int(10) unsigned NOT NULL DEFAULT '0',
            id_tutor int(10) DEFAULT NULL,
            id_lp int(10) unsigned DEFAULT NULL,
            KEY Learning_Packs_Tag_Relations_FKIndex1 (id_tag),
            KEY id_lp (id_lp),
            KEY id_tag (id_tag)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        CREATE TABLE IF NOT EXISTS learning_packs (
            id_lp int(10) unsigned NOT NULL AUTO_INCREMENT,
            id_status int(10) unsigned NOT NULL DEFAULT '2',
            id_author int(10) unsigned NOT NULL DEFAULT '0',
            name varchar(255) NOT NULL DEFAULT '',
            PRIMARY KEY (id_lp)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=21;

        CREATE TABLE IF NOT EXISTS tutors_tag_relations (
            id_tag int(10) unsigned NOT NULL DEFAULT '0',
            id_tutor int(10) DEFAULT NULL,
            KEY Tutors_Tag_Relations (id_tag),
            KEY id_tutor (id_tutor),
            KEY id_tag (id_tag)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        CREATE TABLE IF NOT EXISTS users (
            id_user int(10) unsigned NOT NULL AUTO_INCREMENT,
            name varchar(100) NOT NULL DEFAULT '',
            surname varchar(155) NOT NULL DEFAULT '',
            PRIMARY KEY (id_user)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=52;

        CREATE TABLE IF NOT EXISTS tutor_details (
            id_tutor int(10) NOT NULL AUTO_INCREMENT,
            id_user int(10) NOT NULL,
            PRIMARY KEY (id_tutor)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=60;

        CREATE TABLE IF NOT EXISTS tags (
            id_tag int(10) unsigned NOT NULL AUTO_INCREMENT,
            tag varchar(255) DEFAULT NULL,
            PRIMARY KEY (id_tag),
            UNIQUE KEY tag (tag)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=5;

        ALTER TABLE learning_packs_tag_relations ADD CONSTRAINT Learning_Packs_Tag_Relations_ibfk_1 FOREIGN KEY (id_tag) REFERENCES tags (id_tag) ON DELETE NO ACTION ON UPDATE NO ACTION;
        ALTER TABLE learning_packs ADD CONSTRAINT Learning_Packs_ibfk_2 FOREIGN KEY (id_author) REFERENCES users (id_user) ON DELETE NO ACTION ON UPDATE NO ACTION;
        ALTER TABLE tutors_tag_relations ADD CONSTRAINT Tutors_Tag_Relations_ibfk_1 FOREIGN KEY (id_tag) REFERENCES tags (id_tag) ON DELETE NO ACTION ON UPDATE NO ACTION;

        INSERT INTO test.users (id_user, name, surname) VALUES (NULL, 'Vivian', 'Richards'), (NULL, 'Sachin', 'Tendulkar');
        INSERT INTO test.users (id_user, name, surname) VALUES (NULL, 'Don', 'Bradman');
        INSERT INTO test.tutor_details (id_tutor, id_user) VALUES (NULL, '52'), (NULL, '53');
        INSERT INTO test.tutor_details (id_tutor, id_user) VALUES (NULL, '54');
        INSERT INTO test.tags (id_tag, tag) VALUES (1, 'Vivian'), (2, 'Richards');
        INSERT INTO test.tags (id_tag, tag) VALUES (3, 'Sachin'), (4, 'Tendulkar');
        INSERT INTO test.tags (id_tag, tag) VALUES (5, 'Don'), (6, 'Bradman');
        INSERT INTO test.learning_packs (id_lp, id_status, id_author, name) VALUES ('1', '1', '52', 'Cricket 1'), ('2', '2', '52', 'Cricket 2');
        INSERT INTO test.tags (id_tag, tag) VALUES ('7', 'Cricket'), ('8', '1');
        INSERT INTO test.tags (id_tag, tag) VALUES ('9', '2');
        INSERT INTO test.learning_packs_tag_relations (id_tag, id_tutor, id_lp) VALUES ('7', '52', '1'), ('8', '52', '1');
        INSERT INTO test.learning_packs_tag_relations (id_tag, id_tutor, id_lp) VALUES ('7', '52', '2'), ('9', '52', '2');

    ===================================================================================
    Requirement
    ===================================================================================

    Now I want to search learning_packs with the same AND logic. Help me modify the following query so that searching a pack name or a tutor's name or surname returns all active packs (either those packs directly, or packs created by those tutors).

        select lp.*
        from Learning_Packs AS lp
        LEFT JOIN Learning_Packs_Tag_Relations AS lptagrels ON lp.id_lp = lptagrels.id_lp
        LEFT JOIN Tutors_Tag_Relations as ttagrels ON lp.id_author = ttagrels.id_tutor
        LEFT JOIN Tutor_Details AS td ON ttagrels.id_tutor = td.id_tutor
        LEFT JOIN Users as u on td.id_user = u.id_user
        JOIN Tags as t on (t.id_tag = lptagrels.id_tag) or (t.id_tag = ttagrels.id_tag)
        where lp.id_status = 1
          AND (t.tag LIKE "%Vivian%" OR t.tag LIKE "%Richards%")
        group by lp.id_lp
        HAVING count(lp.id_lp) > 1
        limit 0,20

    As you can see, searching "Cricket 1" returns that pack, but searching Vivian Richards does not return the same pack. Please help.
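    Not a verified answer, just a hedged sketch of one way to make the AND semantics explicit: count the distinct search terms each pack matched (through its own tags or its tutor's tags) and require that count to equal the number of terms.

        -- Sketch: a pack qualifies only when both terms matched somewhere.
        SELECT lp.*
        FROM learning_packs AS lp
        LEFT JOIN learning_packs_tag_relations AS lptagrels ON lp.id_lp = lptagrels.id_lp
        LEFT JOIN tutors_tag_relations AS ttagrels ON lp.id_author = ttagrels.id_tutor
        JOIN tags AS t
          ON t.id_tag = lptagrels.id_tag OR t.id_tag = ttagrels.id_tag
        WHERE lp.id_status = 1
          AND (t.tag LIKE '%Vivian%' OR t.tag LIKE '%Richards%')
        GROUP BY lp.id_lp
        HAVING COUNT(DISTINCT t.tag) = 2   -- 2 = number of search terms
        LIMIT 0, 20;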


  • Issue in creating an INSERT query (see description below)

    - by Parth
    I am creating an INSERT query using PHP, by fetching data from an audit table and iterating over its values in loops. The table from which I am fetching the values has the snapshot below. The code I am using is given here:

        mysql_select_db('information_schema');
        $select = mysql_query("SELECT TABLE_NAME FROM TABLES WHERE TABLE_SCHEMA = 'pranav_test'");
        $selectclumn = mysql_query("SELECT * FROM COLUMNS WHERE TABLE_SCHEMA = 'pranav_test'");
        mysql_select_db('pranav_test');
        $seletaudit = mysql_query("SELECT * FROM jos_audittrail WHERE live = 0");

        $tables = array();
        $i = 0;
        while ($row = mysql_fetch_array($select)) {
            $tables[$i++] = $row['TABLE_NAME'];
        }
        while ($row2 = mysql_fetch_array($seletaudit)) {
            $audit[] = $row2;
        }

        foreach ($audit as $val) {
            if ($val['operation'] == "INSERT") {
                if (in_array($val['table_name'], $tables)) {
                    $insert = "INSERT INTO '" . $val['table_name'] . "' (";
                    $selfld = mysql_query("SELECT field FROM jos_audittrail WHERE table_name = '" . $val['table_name'] . "' AND operation = 'INSERT' AND trackid = '" . $val['trackid'] . "'");
                    while ($row3 = mysql_fetch_array($selfld)) {
                        $values[] = $row3;
                    }
                    foreach ($values as $field) {
                        $insert .= "'" . $field['field'] . "', ";
                    }
                    $insert .= "]";
                    $insert = str_replace(", ]", ")", $insert);
                    $insert .= " values (";
                    $selval = mysql_query("SELECT newvalue FROM jos_audittrail WHERE table_name = '" . $val['table_name'] . "' AND operation = 'INSERT' AND trackid = '" . $val['trackid'] . "' AND live = 0");
                    while ($row4 = mysql_fetch_array($selval)) {
                        $value[] = $row4;
                    }
                    /*echo "<pre>"; print_r($value); exit;*/
                    foreach ($value as $data) {
                        $insert .= "'" . $data['newvalue'] . "', ";
                    }
                    $insert .= "[";
                    $insert = str_replace(", [", ")", $insert);
                }
            }
        }

    When I echo $insert outside the outer foreach (the one over $audit), the values get printed as many times as there are records for the outer loop, i.e.

        'orderby= show_noauth= show_title= link_titles= show_intro= show_section= link_section= show_category= link_category= show_author= show_create_date= show_modify_date= show_item_navigation= show_readmore= show_vote= show_icons= show_pdf_icon= show_print_icon= show_email_icon= show_hits= feed_summary= page_title= show_page_title=1 pageclass_sfx= menu_image=-1 secure=0 ', '0000-00-00 00:00:00', '13', '20', '1', '152', 'accmenu', 'IPL', 'ipl', 'index.php?option=com_content&view=archive', 'component'

    gets repeated, i.e.

        INSERT INTO 'jos_menu' (
            'params', 'checked_out_time', 'ordering', 'componentid', 'published', 'id', 'menutype', 'name', 'alias', 'link', 'type',
            'params', 'checked_out_time', 'ordering', 'componentid', 'published', 'id', 'menutype', 'name', 'alias', 'link', 'type',
            'params', 'checked_out_time', 'ordering', 'componentid', 'published', 'id', 'menutype', 'name', 'alias', 'link', 'type',
            'params', 'checked_out_time', 'ordering', 'componentid', 'published', 'id', 'menutype', 'name', 'alias', 'link', 'type',
            'params', 'checked_out_time', 'ordering', 'componentid', 'published', 'id', 'menutype', 'name', 'alias', 'link', 'type',
            'params', 'checked_out_time', 'ordering', 'componentid', 'published', 'id', 'menutype', 'name', 'alias', 'link', 'type',
            'params', 'checked_out_time', 'ordering', 'componentid', 'published', 'id', 'menutype', 'name', 'alias', 'link', 'type',
            'params', 'checked_out_time', 'ordering', 'componentid', 'published', 'id', 'menutype', 'name', 'alias', 'link', 'type',
            'params', 'checked_out_time', 'ordering', 'componentid', 'published', 'id', 'menutype', 'name', 'alias', 'link', 'type',
            'params', 'checked_out_time', 'ordering', 'componentid', 'published', 'id', 'menutype', 'name', 'alias', 'link', 'type',
            'params', 'checked_out_time', 'ordering', 'componentid', 'published', 'id', 'menutype', 'name', 'alias', 'link', 'type'
        ) values (
            'orderby= show_noauth= show_title= link_titles= show_intro= show_section= link_section= show_category= link_category= show_author= show_create_date= show_modify_date= show_item_navigation= show_readmore= show_vote= show_icons= show_pdf_icon= show_print_icon= show_email_icon= show_hits= feed_summary= page_title= show_page_title=1 pageclass_sfx= menu_image=-1 secure=0 ', '0000-00-00 00:00:00', '13', '20', '1', '152', 'accmenu', 'IPL', 'ipl', 'index.php?option=com_content&view=archive', 'component',
            'orderby= show_noauth= show_title= link_titles= show_intro= show_section= link_section= show_category= link_category= show_author= show_create_date= show_modify_date= show_item_navigation= show_readmore= show_vote= show_icons= show_pdf_icon= show_print_icon= show_email_icon= show_hits= feed_summary= page_title= show_page_title=1 pageclass_sfx= menu_image=-1 secure=0 ', '0000-00-00 00:00:00', '13', '20', '1', '152', 'accmenu', 'IPL', 'ipl', 'index.php?option=com_content&view=archive', 'component',
            'orderby= show_noauth= .. .. .. .. and so on

    What I want is to get these values only once. I know there is a mistake with the outer foreach, but I am not getting the idea of how to rectify it. Please help... please poke me for more clarification.
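    From reading the loop, the likely culprit (my diagnosis, not confirmed by the thread): $values and $value are never re-initialised, so every pass of the outer foreach appends to the arrays left over from the previous pass. A minimal sketch of the fix:

        foreach ($audit as $val) {
            if ($val['operation'] == "INSERT" && in_array($val['table_name'], $tables)) {
                $values = array(); // reset the field list for this audit row
                $value  = array(); // reset the value list for this audit row
                // ... build the column and value lists exactly as before ...
            }
        }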


  • Can I retrieve objects from a complex query that limits results to fields from a single table?

    - by Sean Redmond
    I have a model whose rows I always want to sort based on the values in another, associated model, and I was thinking that the way to implement this would be to use set_dataset in the model. This is causing query results to be returned as hashes rather than objects, though, so none of the methods from the class can be used when iterating over the dataset. I basically have two classes:

        class SortFields < Sequel::Model(:sort_fields)
          set_primary_key :objectid
        end

        class Items < Sequel::Model(:items)
          set_primary_key :objectid
          one_to_one :sort_fields, :class => SortFields, :key => :objectid
        end

    Some backstory: the data is imported from a legacy system into MySQL. The values in sort_fields are calculated from multiple other associated tables (some one-to-many, some many-to-many) according to some complicated rules. The likely solution will be to just add the values in sort_fields to items (I want to keep the imported data separate from the calculated data, but I don't have to). First, though, I just want to understand how far you can go with a dataset and still get objects rather than hashes. If I set the dataset to sort on a field in items like so:

        class Items < Sequel::Model(:items)
          set_primary_key :objectid
          one_to_one :sort_fields, :class => SortFields, :key => :objectid
          set_dataset(order(:sortnumber))
        end

    then the expected clause is added to the generated SQL, e.g.:

        >> Items.limit(1).sql
        => "SELECT * FROM `items` ORDER BY `sortnumber` LIMIT 1"

    and queries still return objects:

        >> Items.limit(1).first.class
        => Items

    If I order it by the associated fields, though...

        class Items < Sequel::Model(:items)
          set_primary_key :objectid
          one_to_one :sort_fields, :class => SortFields, :key => :objectid
          set_dataset(
            eager_graph(:sort_fields).
            order(:sort1, :sort2, :sort3)
          )
        end

    ...I get hashes:

        ?> Items.limit(1).first.class
        => Hash

    My first thought was that this happens because all fields from sort_fields are included in the results, and maybe if I selected only the fields from items I would get Items objects again:

        class Items < Sequel::Model(:items)
          set_primary_key :objectid
          one_to_one :sort_fields, :class => SortFields, :key => :objectid
          set_dataset(
            eager_graph(:sort_fields).
            select(:items.*).
            order(:sort1, :sort2, :sort3)
          )
        end

    The generated SQL is what I would expect:

        >> Items.limit(1).sql
        => "SELECT `items`.* FROM `items` LEFT OUTER JOIN `sort_fields` ON (`sort_fields`.`objectid` = `items`.`objectid`) ORDER BY `sort1`, `sort2`, `sort3` LIMIT 1"

    It returns the same rows as the set_dataset(order(:sortnumber)) version, but it still doesn't work:

        >> Items.limit(1).first.class
        => Hash

    Before I add the sort fields to the items table so that they can all live happily in the same model, is there a way to tell Sequel to return an object when it wants to return a hash?
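    A hedged workaround sketch (my suggestion, based on Sequel's documented behaviour that graphed datasets only split joined rows back into model objects when materialised with #all): keep the model's default dataset untouched and apply the graph and ordering per query instead of baking them into set_dataset.

        # Hypothetical: order via the graph at query time, then materialise.
        sorted = Items.eager_graph(:sort_fields).order(:sort1, :sort2, :sort3).all
        sorted.first.class   # => Items, with #sort_fields populated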


  • SQL Profiler: Read/Write units

    - by Ian Boyd
    I've picked a query out of SQL Server Profiler that says it took 1,497 reads:

        EventClass: SQL:BatchCompleted
        TextData:   SELECT Transactions....
        CPU:        406
        Reads:      1497
        Writes:     0
        Duration:   406

    So I've taken this query into Query Analyzer to try to reduce the number of reads. But when I turn on SET STATISTICS IO ON to see the IO activity for the query, I get nowhere close to one thousand reads:

        Table                Scan Count  Logical Reads
        ===================  ==========  =============
        FintracTransactions  4           20
        LCDs                 2           4
        LCTs                 2           4
        FintracTransacti...  0           0
        Users                1           2
        MALs                 0           0
        Patrons              0           0
        Shifts               1           2
        Cages                1           1
        Windows              1           3
        Logins               1           3
        Sessions             1           6
        Transactions         1           7

    Which, if I do my math right, is a total of 51 reads, not 1,497. So I assume Reads in SQL Profiler is an arbitrary metric. Does anyone know the conversion of SQL Server Profiler reads to IO reads?

    See also: SQL Profiler CPU / duration unit; Query Analyzer vs. Query Profiler reads, writes, and duration discrepancies.
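    As far as I know both counters use the same unit, 8 KB logical page reads; Profiler simply counts everything the batch did, including reads performed inside scalar UDFs and triggers, which SET STATISTICS IO does not report. One hedged way to cross-check from the server's side (a sketch, assuming the statement is still in the plan cache):

        -- sys.dm_exec_query_stats reports the same page-read counter Profiler
        -- does, per cached statement.
        SELECT TOP (10)
               qs.last_logical_reads,
               qs.total_logical_reads,
               qs.execution_count,
               st.text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.last_execution_time DESC;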


  • Filtering by entity key name in Google App Engine on Python

    - by Bemmu
    On Google App Engine, to query the data store with Python, one can use GQL, or Entity.all() and then filter it. So, for example, these are equivalent:

        gql = "SELECT * FROM User WHERE age >= 18"
        db.GqlQuery(gql)

    and

        query = User.all()
        query.filter("age >=", 18)

    Now, it's also possible to query things by key name. I know that in GQL you do it like this:

        gql = "SELECT * FROM User WHERE __key__ >= Key('User', 'abc')"
        db.GqlQuery(gql)

    But how would you now use filter to do the same?

        query = User.all()
        query.filter("__key__ >=", ?????)
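    For what it's worth, a sketch of the filter form: db.Key.from_path builds the same key the GQL literal Key('User', 'abc') describes, so it can stand in as the comparison value.

        from google.appengine.ext import db

        query = User.all()
        query.filter("__key__ >=", db.Key.from_path("User", "abc"))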


  • Using static methods or non-static methods in a DAO class?

    - by dankyy1
    Hi, I generate DAO classes for some DB operations in this manner. Is it better to make the DAO class methods static or non-static? Using the sample DAO class below, if more than one client uses the AddSampleItem method at the same time, what can happen?

        public class SampleDao
        {
            static DataAcessor dataAcessor;

            public static void AddSampleItem(object[] args)
            {
                dataAcessor = new DataAcessor();
                // generate query here
                string query = "...";
                dataAcessor.ExecuteQery(query);
                dataAcessor.Close();
            }

            public static void UpdateSampleItem(object[] args)
            {
                dataAcessor = new DataAcessor();
                // generate query here
                string query = "...";
                dataAcessor.ExecuteQery(query);
                dataAcessor.Close();
            }
        }
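    To make the concurrency concern concrete (my sketch, assuming DataAcessor wraps a single connection): because dataAcessor is a shared static field, two simultaneous callers can each assign it, and one caller's Close() can tear down the connection the other is still using mid-query. Keeping the accessor local to the call removes the shared state while the methods stay static:

        public static void AddSampleItem(object[] args)
        {
            // Per-call instance: no shared static field to race on.
            DataAcessor dataAcessor = new DataAcessor();
            try
            {
                string query = "...";          // generate query here
                dataAcessor.ExecuteQery(query);
            }
            finally
            {
                dataAcessor.Close();           // released even if the query throws
            }
        }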


  • PHP - static DB class vs DB singleton object

    - by Marco Demaio
    I don't want to start a discussion about singleton vs. static vs. global, etc. I have read dozens of questions about it on SO, but I couldn't come up with an answer to this SPECIFIC question, so I hope someone can illuminate me by answering it with one (or more) really simple EXAMPLES, not theoretical discussions. In my app I have the typical DB class needed to perform tasks on the DB without having to write mysql_connect/mysql_select_db/mysql... everywhere in code (moreover, in the future I might decide to use another type of DB engine in place of MySQL, so obviously I need an abstraction class). I could write the class either as a static class:

        class DB
        {
            private static $connection = FALSE; // connection to be opened

            // DB connection values
            private static $server = NULL;
            private static $usr = NULL;
            private static $psw = NULL;
            private static $name = NULL;

            public static function init($db_server, $db_usr, $db_psw, $db_name)
            {
                // simply stores connection values, without opening the connection
            }

            public static function query($query_string)
            {
                // performs the query over the already opened connection;
                // if not open, it opens the connection first
            }
            ...
        }

    or as a singleton class:

        class DBSingleton
        {
            private static $inst = NULL;
            private $connection = FALSE; // connection to be opened

            // DB connection values
            private $server = NULL;
            private $usr = NULL;
            private $psw = NULL;
            private $name = NULL;

            public static function getInstance($db_server, $db_usr, $db_psw, $db_name)
            {
                // simply stores connection values, without opening the connection
                if (self::$inst === NULL) {
                    self::$inst = new DBSingleton();
                }
                return self::$inst;
            }

            private function __construct() ...

            public function query($query_string)
            {
                // performs the query over the already opened connection;
                // if the connection is not open, it opens the connection first
            }
            ...
        }

    Then in my app, if I want to query the DB, I could do:

        // Performing a query using the static DB object
        DB::init(HOST, USR, PSW, DB_NAME);
        DB::query("SELECT...");

        // Performing a query using the DB singleton
        $temp = DBSingleton::getInstance(HOST, USR, PSW, DB_NAME);
        $temp->query("SELECT...");

    My simple brain sees only one advantage in the singleton: it avoids declaring each method of the class as 'static'. I'm sure some of you could give me an EXAMPLE of a real advantage of the singleton in this specific case. Thanks in advance.
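    One concrete example of a real advantage (mine, not from the question): a singleton is an object, so it can implement an interface and be handed to code that does not care which implementation it receives, which is exactly what static calls hard-wired into the code cannot do. A minimal sketch:

        interface DBInterface
        {
            public function query($query_string);
        }

        class DBSingleton implements DBInterface { /* ... as above ... */ }

        class FakeDB implements DBInterface
        {
            // Test double: no real connection needed.
            public function query($query_string) { return array(); }
        }

        // Works against the real singleton in production and FakeDB in tests;
        // with a static DB class, this function could only ever hit the real DB.
        function latest_rows(DBInterface $db)
        {
            return $db->query("SELECT ...");
        }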


  • Print the Jena result set in HTML (servlet/JSP).

    - by Udayanga
    Hi, I'm using a servlet to manipulate an ontology. I got the result of my SPARQL query and I want to display (print) that result in a JSP/servlet. The following code segment can be used to print the result to the console:

        ...
        com.hp.hpl.jena.query.Query query = QueryFactory.create(queryStr);
        QueryExecution qe = QueryExecutionFactory.create(query, model);
        com.hp.hpl.jena.query.ResultSet rs = qe.execSelect();
        ResultSetFormatter.out(System.out, rs);

    Any ideas? Thanks in advance!
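    One hedged sketch (not from the question): walk the ResultSet yourself and emit table rows through the servlet's writer instead of formatting to System.out. It assumes a standard doGet/doPost with an HttpServletResponse named response in scope:

        // response is the servlet's HttpServletResponse
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<table border='1'>");
        while (rs.hasNext()) {
            QuerySolution sol = rs.nextSolution();
            out.println("<tr>");
            for (String var : rs.getResultVars()) {
                // sol.get(var) is an RDFNode; toString gives a readable form
                out.println("<td>" + sol.get(var) + "</td>");
            }
            out.println("</tr>");
        }
        out.println("</table>");

    Note that a ResultSet can only be iterated once, so it has to go either to the console or to the page, not both.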


  • Compiled LINQ queries with built-in SQL functions

    - by Brandi
    I have a query that I am executing in C# that is taking way too much time:

        string Query = "SELECT COUNT(HISTORYID) FROM HISTORY WHERE YEAR(CREATEDATE) = YEAR(GETDATE()) ";
        Query += "AND MONTH(CREATEDATE) = MONTH(GETDATE()) AND DAY(CREATEDATE) = DAY(GETDATE()) AND USERID = '" + EmployeeID + "' ";
        Query += "AND TYPE = '5'";

    I then use SqlCommand Command = new SqlCommand(Query, Connection) and SqlDataReader Reader = Command.ExecuteReader() to read in the data. This takes over a minute to execute from C#, but is much quicker in SSMS. I see from Google searching that you can do something with CompiledQuery, but I'm confused about whether I can still use the built-in SQL functions YEAR, MONTH, DAY, and GETDATE. If anyone can show me an example of how to create and call a compiled query using the built-in functions, I will be very grateful! Thanks in advance.
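    Before reaching for CompiledQuery, one hedged suggestion (mine, not from the thread): the predicate wraps CREATEDATE in YEAR/MONTH/DAY, which prevents SQL Server from seeking any index on CREATEDATE. A half-open date range is equivalent for "today" and is sargable, and parameterising EmployeeID sidesteps injection and plan-cache churn. A sketch, assuming Connection and EmployeeID as in the question:

        // Hypothetical rewrite: range predicate instead of date-part functions.
        string query =
            "SELECT COUNT(HISTORYID) FROM HISTORY " +
            "WHERE CREATEDATE >= @today AND CREATEDATE < DATEADD(DAY, 1, @today) " +
            "AND USERID = @employeeId AND TYPE = '5'";

        using (SqlCommand command = new SqlCommand(query, Connection))
        {
            command.Parameters.AddWithValue("@today", DateTime.Today);
            command.Parameters.AddWithValue("@employeeId", EmployeeID);
            int count = (int)command.ExecuteScalar();   // single scalar, no reader needed
        }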

