Search Results

Search found 26438 results on 1058 pages for 'calendar table'.


  • MySQL Select problems

    - by John Nuñez
    Table #1: qa_returns_items. Table #2: qa_returns_residues. I have spent a long time trying to get this result:

        item_code - item_quantity
        2 - 1
        3 - 2

    The logic I want is:

        IF qa_returns_items.item_code = qa_returns_residues.item_code AND status_code = 11 THEN
            item_quantity = qa_returns_items.item_quantity - qa_returns_residues.item_quantity
        ELSEIF qa_returns_items.item_code = qa_returns_residues.item_code AND status_code = 12 THEN
            item_quantity = qa_returns_items.item_quantity + qa_returns_residues.item_quantity
        ELSE
            show differences
        END IF

    I tried this query:

        select SubQueryAlias.item_code,
               total_ecuation,
               SubQueryAlias.item_unitprice,
               SubQueryAlias.item_unitprice * total_ecuation as item_subtotal,
               item_discount,
               (SubQueryAlias.item_unitprice * total_ecuation) - item_discount as item_total
        from (
            select ri.item_code,
                   case status_code
                       when 11 then ri.item_quantity - rr.item_quantity
                       when 12 then ri.item_quantity + rr.item_quantity
                   end as total_ecuation,
                   rr.item_unitprice,
                   rr.item_quantity,
                   rr.item_discount * rr.item_quantity as item_discount
            from qa_returns_residues rr
            left join qa_returns_items ri on ri.item_code = rr.item_code
            WHERE ri.returnlog_code = 1
        ) as SubQueryAlias
        where total_ecuation > 0
        GROUP BY (item_code);

    The query returns this result instead:

        item_code - item_quantity
        1 - 2
        2 - 2
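
    One pattern worth trying for this kind of conditional arithmetic is to aggregate the residue adjustments per item before applying them, so the GROUP BY cannot pick arbitrary rows. A minimal sketch, assuming the column names from the question and assuming status_code lives on qa_returns_residues (an assumption — the question never says which table holds it):

        -- Sketch: sum signed residue quantities per item, then apply them once.
        select ri.item_code,
               ri.item_quantity
                 + coalesce(sum(case rr.status_code
                                  when 11 then -rr.item_quantity  -- status 11: subtract
                                  when 12 then  rr.item_quantity  -- status 12: add
                                  else 0
                                end), 0) as item_quantity
        from qa_returns_items ri
        left join qa_returns_residues rr on rr.item_code = ri.item_code
        where ri.returnlog_code = 1
        group by ri.item_code, ri.item_quantity;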


  • operating systems - TLBs

    - by stabGreeol
    I'm trying to get my head round this (okay, to be honest, cramming the night before the exam) but I can't figure out, nor find a good high-level overview on the net, of this: "page table entries can be mapped to more than one TLB entry... if for example every page table entry is mapped to two TLB entries, this is known as a 2-way set associative TLB."

    My question is: why would we want to map this more than once? Surely we want to have the maximum number of possible entries represented in the TLB, and duplication would waste space, right? What am I missing? Many thanks.
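
    For reference, a 2-way set associative TLB does not store two copies of one translation; it gives each page two candidate slots (one set of two ways) that its entry may occupy. A minimal toy sketch of the lookup, with all sizes and names made up for illustration:

        # Toy model: 4 sets x 2 ways = 8 TLB entries.
        # A page's VPN selects one set; its translation may sit in either way.
        NUM_SETS = 4
        tlb = [[None, None] for _ in range(NUM_SETS)]  # tlb[set][way] = (vpn, pfn)

        def tlb_lookup(vpn):
            s = vpn % NUM_SETS                    # set index derived from the VPN
            for entry in tlb[s]:
                if entry is not None and entry[0] == vpn:
                    return entry[1]               # hit: physical frame number
            return None                           # miss: fall back to the page table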


  • mappedBy reference an unknown target entity property - hibernate error

    - by tommy
    First, my classes.

    User.java:

        package com.patpuc.model;

        import java.util.List;
        import javax.persistence.Column;
        import javax.persistence.Entity;
        import javax.persistence.Id;
        import javax.persistence.OneToMany;
        import javax.persistence.Table;
        import com.patpuc.model.RolesMap;

        @Entity
        @Table(name = "users")
        public class User {

            @Id
            @Column(name = "USER_ID", unique = true, nullable = false)
            private int user_id;

            @Column(name = "NAME", nullable = false)
            private String name;

            @Column(name = "SURNAME", unique = true, nullable = false)
            private String surname;

            @Column(name = "USERNAME_U", unique = true, nullable = false)
            private String username_u; // zamiast username

            @Column(name = "PASSWORD", unique = true, nullable = false)
            private String password;

            @Column(name = "USER_DESCRIPTION", nullable = false)
            private String userDescription;

            @Column(name = "AUTHORITY", nullable = false)
            private String authority = "ROLE_USER";

            @Column(name = "ENABLED", nullable = false)
            private int enabled;

            @OneToMany(mappedBy = "rUser")
            private List<RolesMap> rolesMap;

            public List<RolesMap> getRolesMap() { return rolesMap; }
            public void setRolesMap(List<RolesMap> rolesMap) { this.rolesMap = rolesMap; }

            /** @return the user_id */
            public int getUser_id() { return user_id; }
            /** @param user_id the user_id to set */
            public void setUser_id(int user_id) { this.user_id = user_id; }

            /** @return the name */
            public String getName() { return name; }
            /** @param name the name to set */
            public void setName(String name) { this.name = name; }

            /** @return the surname */
            public String getSurname() { return surname; }
            /** @param surname the surname to set */
            public void setSurname(String surname) { this.surname = surname; }

            /** @return the username_u */
            public String getUsername_u() { return username_u; }
            /** @param username_u the username_u to set */
            public void setUsername_u(String username_u) { this.username_u = username_u; }

            /** @return the password */
            public String getPassword() { return password; }
            /** @param password the password to set */
            public void setPassword(String password) { this.password = password; }

            /** @return the userDescription */
            public String getUserDescription() { return userDescription; }
            /** @param userDescription the userDescription to set */
            public void setUserDescription(String userDescription) { this.userDescription = userDescription; }

            /** @return the authority */
            public String getAuthority() { return authority; }
            /** @param authority the authority to set */
            public void setAuthority(String authority) { this.authority = authority; }

            /** @return the enabled */
            public int getEnabled() { return enabled; }
            /** @param enabled the enabled to set */
            public void setEnabled(int enabled) { this.enabled = enabled; }

            @Override
            public String toString() {
                StringBuffer strBuff = new StringBuffer();
                strBuff.append("id : ").append(getUser_id());
                strBuff.append(", name : ").append(getName());
                strBuff.append(", surname : ").append(getSurname());
                return strBuff.toString();
            }
        }

    RolesMap.java:

        package com.patpuc.model;

        import javax.persistence.Column;
        import javax.persistence.Entity;
        import javax.persistence.Id;
        import javax.persistence.JoinColumn;
        import javax.persistence.ManyToOne;
        import javax.persistence.Table;
        import com.patpuc.model.User;

        @Entity
        @Table(name = "roles_map")
        public class RolesMap {

            private int rm_id;
            private String username_a;
            private String username_l;
            //private String username_u;
            private String password;
            private int role_id;

            @ManyToOne
            @JoinColumn(name = "username_u", nullable = false)
            private User rUser;

            public RolesMap() { }

            /** @return the user */
            public User getUser() { return rUser; }
            /** @param rUser the user to set */
            public void setUser(User rUser) { this.rUser = rUser; }

            @Id
            @Column(name = "RM_ID", unique = true, nullable = false)
            public int getRmId() { return rm_id; }
            public void setRmId(int rm_id) { this.rm_id = rm_id; }

            @Column(name = "USERNAME_A", unique = true)
            public String getUsernameA() { return username_a; }
            public void setUsernameA(String username_a) { this.username_a = username_a; }

            @Column(name = "USERNAME_L", unique = true)
            public String getUsernameL() { return username_l; }
            public void setUsernameL(String username_l) { this.username_l = username_l; }

            @Column(name = "PASSWORD", unique = true, nullable = false)
            public String getPassword() { return password; }
            public void setPassword(String password) { this.password = password; }

            @Column(name = "ROLE_ID", unique = true, nullable = false)
            public int getRoleId() { return role_id; }
            public void setRoleId(int role_id) { this.role_id = role_id; }
        }

    When I try to run this on the server, I get an exception like this:

        Error creating bean with name 'SessionFactory' defined in ServletContext resource
        [/WEB-INF/classes/baseBeans.xml]: Invocation of init method failed; nested exception is
        org.hibernate.AnnotationException: mappedBy reference an unknown target entity property:
        com.patpuc.model.RolesMap.users in com.patpuc.model.User.rolesMap

    But I don't exactly know what I'm doing wrong. Can somebody help me fix this problem?
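
    One detail that often causes exactly this AnnotationException is mixed access: RolesMap carries its @Id on a getter (property access) but its @ManyToOne on a field, and mappedBy can only resolve against the access type Hibernate actually uses. A minimal sketch of one possible fix, keeping all RolesMap annotations on getters so the bean property is named "user" (hedged; this is a common cause, not necessarily the only problem here):

        // Sketch: annotate the getter, not the field, in RolesMap...
        @ManyToOne
        @JoinColumn(name = "username_u", nullable = false)
        public User getUser() { return rUser; }

        // ...and make mappedBy on the User side name that bean property:
        @OneToMany(mappedBy = "user")   // must match a mapped property of RolesMap
        private List<RolesMap> rolesMap;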


  • Efficiency of checking for null varbinary(max) column ?

    - by Moe Sisko
    Using SQL Server 2008. Example table:

        CREATE TABLE dbo.blobtest (
            id int PRIMARY KEY NOT NULL,
            name nvarchar(200) NOT NULL,
            data varbinary(max) NULL)

    Example query:

        select id, name,
               cast((case when data is null then 0 else 1 end) as bit) as DataExists
        from dbo.blobtest

    The query needs to return a "DataExists" column that is 0 if the blob is null, else 1. This all works fine, but I'm wondering how efficient it is: does SQL Server need to read the whole blob into memory, or is there some optimization so that it does just enough reads to figure out whether the blob is null or not? (FWIW, the sp_tableoption "large value types out of row" option is set to OFF for this example.)
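
    If the null test ever does show up as a cost, one way to sidestep the question entirely is a persisted computed column, so SELECTs read a precomputed bit and never touch the blob. A minimal sketch of that idea (not a statement about how SQL Server handles the original query):

        -- Sketch: persist the null flag alongside the row.
        ALTER TABLE dbo.blobtest
            ADD DataExists AS CAST(CASE WHEN data IS NULL THEN 0 ELSE 1 END AS bit) PERSISTED;

        SELECT id, name, DataExists FROM dbo.blobtest;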


  • sql server 2005 indexes and low cardinality

    - by Peanut
    How does SQL Server determine whether a table column has low cardinality? The reason I ask is that the query optimizer would most probably not use an index on a gender column (values 'm' and 'f'). But how would it determine the cardinality of the gender column to come to that decision?

    On top of this: if, in the unlikely event that I had a million entries in my table and only one entry in the gender column was 'm', would SQL Server be able to determine this and use the index to retrieve that single row? Or would it just know there are only 2 distinct values in the column and not use the index?

    I appreciate the above describes some poor db design, but I'm just trying to understand how the query optimizer comes to its decisions. Many thanks.
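
    For what it's worth, SQL Server bases these decisions on per-column statistics (a histogram plus density information), and those can be inspected directly. A minimal sketch, with a hypothetical table and index name:

        -- Sketch: show the histogram/density the optimizer consults for estimates.
        DBCC SHOW_STATISTICS ('dbo.people', IX_people_gender);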


  • derby + hibernate ConstraintViolationException using manytomany relationships

    - by user364470
    Hi, I'm new to Hibernate + Derby... I've seen this issue mentioned throughout Google, but have not seen a proper resolution. The following code works fine with MySQL, but when I try this on Derby I get exceptions. (Each Tag has two sets of files and vice versa: many-to-many.)

    Tags.java:

        @Entity
        @Table(name = "TAGS")
        public class Tags implements Serializable {

            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            public long getId() { return id; }

            @ManyToMany(targetEntity = Files.class)
            @ForeignKey(name = "USER_TAGS_FILES", inverseName = "USER_FILES_TAGS")
            @JoinTable(name = "USERTAGS_FILES",
                    joinColumns = @JoinColumn(name = "TAGS_ID"),
                    inverseJoinColumns = @JoinColumn(name = "FILES_ID"))
            public Set<data.Files> getUserFiles() { return userFiles; }

            @ManyToMany(mappedBy = "autoTags", targetEntity = data.Files.class)
            public Set<data.Files> getAutoFiles() { return autoFiles; }

    Files.java:

        @Entity
        @Table(name = "FILES")
        public class Files implements Serializable {

            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            public long getId() { return id; }

            @ManyToMany(mappedBy = "userFiles", targetEntity = data.Tags.class)
            public Set getUserTags() { return userTags; }

            @ManyToMany(targetEntity = Tags.class)
            @ForeignKey(name = "AUTO_FILES_TAGS", inverseName = "AUTO_TAGS_FILES")
            @JoinTable(name = "AUTOTAGS_FILES",
                    joinColumns = @JoinColumn(name = "FILES_ID"),
                    inverseJoinColumns = @JoinColumn(name = "TAGS_ID"))
            public Set getAutoTags() { return autoTags; }

    I add some data to the DB, but when running over Derby these exceptions turn up (they don't with MySQL):

        SEVERE: DELETE on table 'FILES' caused a violation of foreign key constraint 'USER_FILES_TAGS' for key (3). The statement has been rolled back.
        Jun 10, 2010 9:49:52 AM org.hibernate.event.def.AbstractFlushingEventListener performExecutions
        SEVERE: Could not synchronize database state with session
        org.hibernate.exception.ConstraintViolationException: could not delete: [data.Files#3]
            at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:96)
            at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66)
            at org.hibernate.persister.entity.AbstractEntityPersister.delete(AbstractEntityPersister.java:2712)
            at org.hibernate.persister.entity.AbstractEntityPersister.delete(AbstractEntityPersister.java:2895)
            at org.hibernate.action.EntityDeleteAction.execute(EntityDeleteAction.java:97)
            at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:268)
            at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:260)
            at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:184)
            at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
            at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:51)
            at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1206)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:613)
            at org.hibernate.context.ThreadLocalSessionContext$TransactionProtectionWrapper.invoke(ThreadLocalSessionContext.java:344)
            at $Proxy13.flush(Unknown Source)
            at data.HibernateORM.removeFile(HibernateORM.java:285)
            at data.DataImp.removeFile(DataImp.java:195)
            at booting.DemoBootForTestUntilTestClassesExist.main(DemoBootForTestUntilTestClassesExist.java:62)

    I have never used Derby before, so maybe there is something crucial that I'm missing.

    1) What am I doing wrong?
    2) Is there any way of cascading properly when I have two many-to-many relationships between two classes?

    Thanks!
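
    One commonly suggested workaround for deletes across many-to-many links is to detach the entity from every owning-side collection first, so the join-table rows go away before the row they reference. A minimal sketch under that assumption (session and id variables hypothetical):

        // Sketch: remove join-table rows, then the file, in one transaction.
        Transaction tx = session.beginTransaction();
        Files file = (Files) session.get(Files.class, fileId);

        for (Tags tag : file.getUserTags()) {
            tag.getUserFiles().remove(file);   // owning side of USERTAGS_FILES
        }
        file.getAutoTags().clear();            // owning side of AUTOTAGS_FILES

        session.delete(file);
        tx.commit();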


  • MYSQL query to get all entries with specific time, from PHP?

    - by meds
    I'm trying to query a MySQL table which stores its date in the following format: yyyy-mm-dd hh:mm:ss. So it's date and time, all in a single field. Now, from PHP, I want to query the table so it only returns entries where the date field is less than 24 hours old. I'm having issues because PHP's time functions seem to return the values separately, and I'm struggling to figure out how to make that work with MySQL queries. This seems fairly simple, but I'm quite new to PHP, so sorry if I'm completely missing something.
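
    For what it's worth, the comparison can be done entirely on the MySQL side, so PHP never has to format a time value at all. A minimal sketch, with a hypothetical table and column name:

        -- Sketch: only rows whose datetime is within the last 24 hours.
        SELECT *
        FROM mytable
        WHERE created_at >= NOW() - INTERVAL 1 DAY;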


  • SSRS: Report label position dynamic

    - by Nauman
    I have a report which displays a customer address in multiple labels. My customers use windowed envelopes for mailing, so I need the address label positions to be configurable: something like a database table which stores the Top/Left position of each label per customer, with the report positioning the address labels based on that table. I thought it would be doable with expressions, but the Location property doesn't provide the ability to set an expression and make a label's top and left dynamic. Anybody have any ideas on how to achieve this?


  • Why doesn't this (translated) VB.NET code work?

    - by ropstah
    I had a piece of C# code converted, but the translated code isn't valid... Can somebody help out?

    C#:

        <table>
        <% Html.Repeater<Hobby>("Hobbies", "row", "row-alt", (hobby, css) => { %>
            <tr class="<%= css %>">
                <td><%= hobby.Title %></td>
            </tr>
        <% }); %>
        </table>

    VB:

        <% Html.Repeater(of Jrc3.BLL.Product)(Model.ProductCollectionByPrcAutoKey, "row", "row-alt", Function(product, css) Do %>
            <tr class="<%= css %>">
                <td><%= hobby.Title %></td>
            </tr>
        <% End Function) %>


  • MySQL: Transactions across multiple threads

    - by Zombies
    Preliminary: I have an application which maintains a thread pool of about 100 threads. Each thread can last about 1-30 seconds before a new task replaces it. When a thread ends, it will almost always have inserted 1-3 records into a table; this table is used by all of the threads. Right now no transactional support exists, but I am trying to add that now.

    Goal: I want to implement a transaction for this. The rules for whether this transaction commits or rolls back reside in the main thread; basically, there is a simple function that will return a boolean. Can I implement a transaction across multiple connections? If not, can multiple threads share the same connection? (Note: there are a LOT of inserts going on here, and that is a requirement.)
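
    For reference, a MySQL transaction is scoped to a single connection, so the usual shape is to funnel every insert that must commit together through one connection and make the commit/rollback decision in one place. A minimal JDBC sketch under that assumption (all names hypothetical; access to the shared connection is serialized because JDBC connections are not thread-safe):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        // Sketch: one connection owns the transaction; worker threads hand
        // their rows to it instead of opening connections of their own.
        public class SharedTransaction {
            private final Connection conn;

            public SharedTransaction(String url, String user, String pass) throws SQLException {
                conn = DriverManager.getConnection(url, user, pass);
                conn.setAutoCommit(false);                 // open the transaction
            }

            // Called by worker threads as they finish.
            public synchronized void insert(long taskId, String value) throws SQLException {
                try (PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO results (task_id, value) VALUES (?, ?)")) {
                    ps.setLong(1, taskId);
                    ps.setString(2, value);
                    ps.executeUpdate();
                }
            }

            // Called once from the main thread after its boolean rule runs.
            public synchronized void finish(boolean commit) throws SQLException {
                if (commit) conn.commit(); else conn.rollback();
            }
        }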


  • Why might SQL execute more quickly on SQL Server 2000 when NOT using a stored procedure?

    - by Kofi Sarfo
    I could see nothing wrong with the execution plan. Besides, as I understand it, SQL Server 2000 extended many of the performance benefits of stored procedures to all SQL statements by matching new T-SQL statements against the T-SQL of existing execution plans (it retains execution plans for all SQL statements in the procedure cache, not just stored procedure plans).

    It's a fairly straightforward SELECT statement with sensible table joins, no transactions or linked-server references within the query, and WITH (NOLOCK) table hints applied. The stored procedure was created by dbo, and the user has all the necessary permissions.

    So my question is this: what are the likely reasons for a query to take only a few seconds to run, but then take several minutes when identical T-SQL is run via a stored procedure?
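
    One frequently cited culprit for exactly this symptom is parameter sniffing: the cached procedure plan was compiled for an atypical parameter value. A common diagnostic sketch (procedure, table, and parameter names all hypothetical) is to copy parameters into local variables, which makes the optimizer fall back to average-density estimates:

        -- Sketch: decouple the cached plan from the sniffed parameter value.
        CREATE PROCEDURE dbo.GetOrders @CustomerId int
        AS
        BEGIN
            DECLARE @LocalCustomerId int
            SET @LocalCustomerId = @CustomerId   -- the optimizer cannot sniff this

            SELECT o.OrderId, o.OrderDate
            FROM dbo.Orders o WITH (NOLOCK)
            WHERE o.CustomerId = @LocalCustomerId
        END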


  • Stop duplicate icmp echo replies when bridging to a dummy interface?

    - by mbrownnyc
    I recently configured a bridge br0 with members eth0 (real interface) and dummy0 (dummy.ko interface). When I ping this machine, I receive duplicate replies:

        # ping SERVERA
        PING SERVERA.domain.local (192.168.100.115) 56(84) bytes of data.
        64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=1 ttl=62 time=113 ms
        64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=1 ttl=62 time=114 ms (DUP!)
        64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=2 ttl=62 time=113 ms
        64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=2 ttl=62 time=113 ms (DUP!)

    Using tcpdump on SERVERA, I was able to see ICMP echo replies being sent from eth0 and from br0 itself, as follows (oddly, two echo request packets arrive "from" my Windows box myhost):

        23:19:05.324192 IP myhost.domain.local > SERVERA.domain.local: ICMP echo request, id 512, seq 43781, length 40
        23:19:05.324212 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40
        23:19:05.324217 IP myhost.domain.local > SERVERA.domain.local: ICMP echo request, id 512, seq 43781, length 40
        23:19:05.324221 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40
        23:19:05.324264 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40
        23:19:05.324272 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40

    It's worth noting that testing reveals hosts on the same physical switch do not see duplicate ICMP echo responses (a host on the same VLAN on another switch does see a duplicate). I've read that this could be due to the ARP table of a switch, but I can't find any info directly related to bridges, just bonds. I have a feeling my problem lies in the stack on the Linux box, not the switch, but am open to any suggestions. The system is running CentOS 6/EL6, kernel 2.6.32-71.29.1.el6.i686.

    How do I stop ICMP echo replies from being sent in duplicate when dealing with a bridge interface/bridged interfaces?

    Thanks, Matt

    [edit] Quick note: it was recommended in #linux to:

        [08:53] == mbrownnyc [gateway/web/freenode/] has joined ##linux
        [08:57] <lkeijser> mbrownnyc: what happens if you set arp_ignore to 1 for the dummy interface?
        [08:59] <lkeijser> also set arp_announce to 2 for that interface
        [09:24] <mbrownnyc> lkeijser: I set arp_annouce to 2, arp_ignore to 2 in /etc/sysctl.conf and rebooted the machine... verifying that the bits are set after boot... the problem is still present

    I did this and came up empty: same dup problem. I will be moving away from including the dummy interface in the bridge, as discussed in #Netfilter:

        [09:31] == mbrownnyc [gateway/web/freenode/] has joined #Netfilter
        [09:31] <mbrownnyc> Hello all... I'm wondering, is it correct that even with an interface in PROMISC that the kernel will drop /some/ packets before they reach applications?
        [09:31] <whaffle> What would you make think so?
        [09:32] <mbrownnyc> I ask because I am receiving ICMP echo replies after configuring a bridge with a dummy interface in order for ipt_netflow to see all packets, only as reported in it's documentation: http://ipt-netflow.git.sourceforge.net/git/gitweb.cgi?p=ipt-netflow/ipt-netflow;a=blob;f=README.promisc
        [09:32] <mbrownnyc> but I do not know if PROMISC will do the same job
        [09:33] <mbrownnyc> I was referred here from #linux. any assistance is appreciated
        [09:33] <whaffle> The following conditions need to be met: PROMISC is enabled (bridges and applications like tcpdump will do this automatically, otherwise they won't function).
        [09:34] <whaffle> If an interface is part of a bridge, then all packets that enter the bridge should already be visible in the raw table.
        [09:35] <mbrownnyc> thanks whaffle PROMISC must be set manually for ipt_netflow to function, but
        [09:36] <whaffle> promisc does not need to be set manually, because the bridge will do it for you.
        [09:36] <whaffle> When you do not have a bridge, you can easily create one, thereby rendering any kernel patches moot.
        [09:36] <mbrownnyc> whaffle: I speak without the bridge
        [09:36] <whaffle> It is perfectly valid to have a "half-bridge" with only a single interface in it.
        [09:36] <mbrownnyc> whaffle: I am unfamiliar with the raw table, does this mean that PROMISC allows the raw table to be populated with packets the same as if the interface was part of a bridge?
        [09:37] <whaffle> Promisc mode will cause packets with {a dst MAC address that does not equal the interface's MAC address} to be delivered from the NIC into the kernel nevertheless.
        [09:37] <mbrownnyc> whaffle: I suppose I mean to clearly ask: what benefit would creating a bridge have over setting an interface PROMISC?
        [09:38] <mbrownnyc> whaffle: from your last answer I feel that the answer to my question is "none," is this correct?
        [09:39] <whaffle> Furthermore, the linux kernel itself has a check for {packets with a non-local MAC address}, so that packets that will not enter a bridge will be discarded as well, even in the face of PROMISC.
        [09:46] <mbrownnyc> whaffle: so, this last bit of information is quite clearly why I would need and want a bridge in my situation
        [09:46] <mbrownnyc> okay, the ICMP echo reply duplicate issue is likely out of the realm of this channel, but I sincerely appreciate the info on the kernels inner-workings
        [09:52] <whaffle> mbrownnyc: either the kernel patch, or a bridge with an interface. Since the latter is quicker, yes
        [09:54] <mbrownnyc> thanks whaffle

    [edit2] After removing the bridge and removing the dummy kernel module, I had only a single interface left, chilling out, lonely. I still received duplicate ICMP echo replies... in fact I received a random amount: http://pastebin.com/2LNs0GM8. The same thing doesn't happen on a few other hosts on the same switch, so it has to do with the Linux box itself. I'll likely end up rebuilding it next week. Then... you know... this same thing will occur again.

    [edit3] Guess what? I rebuilt the box, and I'm still receiving duplicate ICMP echo replies. It must be the network infrastructure, although the ARP tables do not contain multiple entries.

    [edit4] How ridiculous. The machine was a network probe, so I was mirroring an uplink port (ingress and egress) to the node that was the NIC. So the flow must have gone like this:

    1. The ICMP echo request comes in through the mirrored uplink port.
    2. The (real) ICMP echo request is received by the NIC.
    3. The (mirrored) ICMP echo request is received by the NIC.
    4. An ICMP echo reply is sent for both.

    I'm ashamed of myself, but now I know. It was suggested on #networking to either isolate the mirrored traffic to an interface that does not have IP enabled, or tag the mirrored packets with dot1q.


  • SQL Like question

    - by mike
    Is there a way to reverse the SQL LIKE operator so it searches a field backwards? For example, I have a value in a field that looks like this: "Xbox 360 Video Game". If I write a query like the one below, it returns the result fine:

        SELECT id FROM table WHERE title like "%Xbox%Game%"

    However, when I search like this, it doesn't find any results:

        SELECT id FROM table WHERE title like "%Video%Xbox%"

    I need it to match in any direction. How can I get around this?
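
    One order-independent formulation is to give each term its own LIKE predicate and AND them together. A minimal sketch, reusing the question's placeholder table name:

        -- Sketch: each term matches independently, so word order no longer matters.
        SELECT id
        FROM table
        WHERE title LIKE '%Video%'
          AND title LIKE '%Xbox%';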


  • How does void QTableWidget::setItemPrototype ( const QTableWidgetItem * item ) clones objects?

    - by chappar
    The documentation for QTableWidget::setItemPrototype says the following: "The table widget will use the item prototype clone function when it needs to create a new table item. For example when the user is editing in an empty cell. This is useful when you have a QTableWidgetItem subclass and want to make sure that QTableWidget creates instances of your subclass." How does this actually work, given that you can pass a pointer to any QTableWidgetItem subclass to setItemPrototype, and at run time there is no way to get the size of an object from just a pointer to it?
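
    For context, the prototype mechanism relies on virtual dispatch rather than on object size: the widget calls the prototype's virtual clone(), and the override in your subclass constructs the right type itself. A minimal sketch of such a subclass (class name hypothetical):

        #include <QTableWidgetItem>

        // Sketch: the table never needs sizeof(MyItem); the virtual call
        // lands in this override, and the subclass allocates itself.
        class MyItem : public QTableWidgetItem {
        public:
            MyItem() : QTableWidgetItem() {}
            MyItem(const MyItem &other) : QTableWidgetItem(other) {}

            MyItem *clone() const override {
                return new MyItem(*this);   // the subclass knows its own size
            }
        };

        // Usage sketch: table->setItemPrototype(new MyItem);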


  • Query to find all the nodes that are two steps away from a particular node.

    - by iecut
    Suppose I have two columns in a table that represents a graph; the first column is FROMNODE and the second one is TONODE. What I would like to know is: how do we find all the nodes that are two steps away from a particular node? Let's suppose I have a node numbered 1 and I would like to know all the nodes that are two steps away from it. I have tried (assuming the table is named GRAPH):

        SELECT FROMNODE FROM GRAPH WHERE TONODE=1

    (this selects all the nodes that are connected to node 1, but I couldn't figure out how to find all the nodes that are two steps away from node 1).
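
    One way to walk two edges is to join the table to itself so the end of the first hop feeds the start of the second. A minimal sketch, keeping the question's edge direction (edges arriving at node 1):

        -- Sketch: g2 is the second hop behind every node that reaches node 1.
        SELECT DISTINCT g2.FROMNODE
        FROM GRAPH g1
        JOIN GRAPH g2 ON g2.TONODE = g1.FROMNODE
        WHERE g1.TONODE = 1;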


  • JPA entitymanager remove operation is not performant

    - by Samuel
    When I try to do an entityManager.remove(instance), the underlying JPA provider issues a separate delete operation for each GroupUser entity. I feel this is not right from a performance perspective: if a Group has 1000 users, 1001 calls will be issued to delete the entire group and its GroupUser entities.

    Would it make more sense to write a named query to remove all entries in the groupuser table (e.g. delete from group_user where group_id=?), so that I would have to make just 2 calls to delete the group?

        @Entity
        @Table(name = "tbl_group")
        public class Group {

            @OneToMany(mappedBy = "group", cascade = CascadeType.ALL, fetch = FetchType.LAZY)
            @Cascade(value = DELETE_ORPHAN)
            private Set<GroupUser> groupUsers = new HashSet<GroupUser>(0);
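
    For reference, the two-statement version the question sketches is usually written as a JPQL bulk delete, which bypasses cascade processing entirely. A minimal sketch, assuming a resource-local EntityManager and hypothetical variable names:

        // Sketch: one bulk DELETE for the children, one remove for the parent.
        em.getTransaction().begin();

        em.createQuery("DELETE FROM GroupUser gu WHERE gu.group.id = :groupId")
          .setParameter("groupId", groupId)
          .executeUpdate();               // a single SQL DELETE for all rows

        Group group = em.find(Group.class, groupId);
        em.remove(group);                 // now only the group row remains

        em.getTransaction().commit();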


  • Strange: Planner takes decision with lower cost, but (very) long query runtime

    - by S38
    Facts:

    - PostgreSQL 8.4.2, Linux
    - I make use of table inheritance
    - Each table contains 3 million rows
    - Indexes on the joining columns are set
    - Table statistics (analyze, vacuum analyze) are up to date
    - The only table used is "node", with its various partitioned sub-tables
    - Recursive query (pg >= 8.4)

    Now here is the explained query:

        WITH RECURSIVE rows AS (
            SELECT * FROM (
                SELECT r.id, r.set, r.parent, r.masterid
                FROM d_storage.node_dataset r
                WHERE masterid = 3533933
            ) q
            UNION ALL
            SELECT * FROM (
                SELECT c.id, c.set, c.parent, r.masterid
                FROM rows r
                JOIN a_storage.node c ON c.parent = r.id
            ) q
        )
        SELECT r.masterid, r.id AS nodeid
        FROM rows r

        QUERY PLAN
        -----------------------------------------------------------------------------------------------
        CTE Scan on rows r  (cost=2742105.92..2862119.94 rows=6000701 width=16) (actual time=0.033..172111.204 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..2742105.92 rows=6000701 width=28) (actual time=0.029..172111.183 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.025..0.027 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Hash Join  (cost=0.33..262208.33 rows=600070 width=28) (actual time=40628.371..57370.361 rows=1 loops=3)
                        Hash Cond: (c.parent = r.id)
                        ->  Append  (cost=0.00..211202.04 rows=12001404 width=20) (actual time=0.011..46365.669 rows=12000004 loops=3)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.002..0.002 rows=0 loops=3)
                              ->  Seq Scan on node_dataset c  (cost=0.00..55001.01 rows=3000001 width=20) (actual time=0.007..3426.593 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=0.008..9049.189 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten_adresse c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=3.455..8381.725 rows=3000001 loops=3)
                              ->  Seq Scan on node_testdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=1.810..5259.178 rows=3000001 loops=3)
                        ->  Hash  (cost=0.20..0.20 rows=10 width=16) (actual time=0.010..0.010 rows=1 loops=3)
                              ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.002..0.004 rows=1 loops=3)
        Total runtime: 172111.371 ms
        (16 rows)

    So far so bad: the planner decides to use hash joins (good) but no indexes (bad). Now, after doing the following:

        SET enable_hashjoin TO false;

    the explained query looks like this:

        QUERY PLAN
        -----------------------------------------------------------------------------------------------
        CTE Scan on rows r  (cost=15198247.00..15318261.02 rows=6000701 width=16) (actual time=0.038..49.221 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..15198247.00 rows=6000701 width=28) (actual time=0.032..49.201 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.028..0.031 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Nested Loop  (cost=0.00..1507822.44 rows=600070 width=28) (actual time=10.384..16.382 rows=1 loops=3)
                        Join Filter: (r.id = c.parent)
                        ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.001..0.003 rows=1 loops=3)
                        ->  Append  (cost=0.00..113264.67 rows=3001404 width=20) (actual time=8.546..12.268 rows=1 loops=4)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.001..0.001 rows=0 loops=4)
                              ->  Bitmap Heap Scan on node_dataset c  (cost=58213.87..113214.88 rows=3000001 width=20) (actual time=1.906..1.906 rows=0 loops=4)
                                    Recheck Cond: (c.parent = r.id)
                                    ->  Bitmap Index Scan on node_dataset_parent  (cost=0.00..57463.87 rows=3000001 width=0) (actual time=1.903..1.903 rows=0 loops=4)
                                          Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_parent on node_stammdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=3.272..3.273 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_adresse_parent on node_stammdaten_adresse c  (cost=0.00..8.60 rows=1 width=20) (actual time=4.333..4.333 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_testdaten_parent on node_testdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=2.745..2.746 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
        Total runtime: 49.349 ms
        (21 rows)

    Incredibly faster, because indexes were used. Notice: the cost of the second query is somewhat higher than that of the first query.

    So the main question is: why does the planner make the first decision instead of the second?

    Also interesting: via SET enable_seqscan TO false; I temporarily disabled seq scans. Then the planner used indexes and hash joins, and the query was still slow. So the problem seems to be the hash join.

    Maybe someone can help in this confusing situation? thx, R.
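
    For what it's worth, situations like this are often investigated by adjusting the planner's cost constants rather than by disabling node types, since the estimated costs (not the actual times) drive the choice between those two plans. A sketch of session-level settings that are commonly experimented with (the values are illustrative, not recommendations):

        -- Sketch: make random (index) I/O look cheaper to the planner.
        SET random_page_cost = 2.0;         -- default 4.0; lower favors index scans
        SET effective_cache_size = '2GB';   -- hint that much of the data is cached
        -- ...then re-run EXPLAIN ANALYZE on the recursive query.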


  • How do you update the aspnetdb membership IsApproved value?

    - by Matt
    I need to update an existing user's IsApproved status in the aspnet_Membership table. I have the code below, which does not seem to be working: the user.IsApproved property is updated, but the change is not saved to the database table. Are there any additional calls I need to make? Any suggestions? Thanks.

        /// <summary>
        /// Updates a user's approval status to the specified value
        /// </summary>
        /// <param name="userName">The user to update</param>
        /// <param name="isApproved">The updated approval status</param>
        public static void UpdateApprovalStatus(string userName, bool isApproved)
        {
            MembershipUser user = Membership.GetUser(userName);
            if (user != null)
                user.IsApproved = isApproved;
        }
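
    For reference, the Membership API persists changes only when the modified MembershipUser is handed back to the provider. A sketch of the same method with that extra call:

        using System.Web.Security;

        // Sketch: setting IsApproved changes only the in-memory object;
        // Membership.UpdateUser pushes the change to the membership store.
        public static void UpdateApprovalStatus(string userName, bool isApproved)
        {
            MembershipUser user = Membership.GetUser(userName);
            if (user != null)
            {
                user.IsApproved = isApproved;
                Membership.UpdateUser(user);   // persists to aspnet_Membership
            }
        }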


  • What does SQL Server execution plan show?

    - by tim
    There is the following code:

        declare @XmlData xml = '<Locations>
            <Location rid="1"/>
        </Locations>'

        declare @LocationList table (RID char(32));

        insert into @LocationList(RID)
        select Location.RID.value('@rid','CHAR(32)')
        from @XmlData.nodes('/Locations/Location') Location(RID)

        insert into @LocationList(RID)
        select A2RID from tblCdbA2

    The table tblCdbA2 has 172810 rows. I executed the batch in SSMS with "Include Actual Execution Plan" and with Profiler running. The plan shows that the first query's cost is 88% relative to the batch and the second's is 12%, but Profiler says that the durations of the first and second query are 17 ms and 210 ms respectively; the overall time is 229 ms, which is not an 88/12 split. What is going on? Is there a way to determine from the execution plan which is the slowest part of the query?
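
    One way to get measured (rather than estimated) per-statement numbers without Profiler is to enable the session's time and I/O statistics. A minimal sketch:

        -- Sketch: report actual CPU/elapsed time and logical reads per statement.
        SET STATISTICS TIME ON;
        SET STATISTICS IO ON;

        -- ...run the batch here...

        SET STATISTICS TIME OFF;
        SET STATISTICS IO OFF;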


  • HQL query for entity with max value

    - by Rob
    I have a Hibernate entity that looks like this (accessors omitted for brevity):

        @Entity
        @Table(name = "FeatureList_Version")
        @SecondaryTable(name = "FeatureList",
                pkJoinColumns = @PrimaryKeyJoinColumn(name = "FeatureList_Key"))
        public class FeatureList implements Serializable {

            @Id
            @Column(name = "FeatureList_Version_Key")
            private String key;

            @Column(name = "Name", table = "FeatureList")
            private String name;

            @Column(name = "VERSION")
            private Integer version;
        }

    I want to craft an HQL query that retrieves the most up-to-date version of a FeatureList. The following query sort of works:

        select f.name, max(f.version) from FeatureList f group by f.name

    The trouble is that this won't populate the key field, which I need to contain the key of the record with the highest version number for the given FeatureList. If I add f.key to the select, it won't work because it's not in the group by or an aggregate; and if I put it in the group by, the whole thing stops working and just gives me every version as a separate entity. So, can anybody help?
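
    One common shape for "the row with the max value per group" in HQL is a correlated subquery, which returns whole entities (key included). A minimal sketch, assuming a plain Hibernate Session:

        // Sketch: fetch full FeatureList entities whose version is the
        // maximum among rows sharing the same name.
        List<FeatureList> latest = session.createQuery(
                "from FeatureList f " +
                "where f.version = (select max(f2.version) from FeatureList f2 " +
                "                   where f2.name = f.name)")
            .list();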


  • Counting multiple entries in a MySQL database?

    - by Aaron
    Hi all, I'm trying to count multiple entries in a MySQL database. I know how to use COUNT(), but the exact syntax to get the results I need eludes me.

    The problem: the table structure is ID, CODE, AUTHOR, COUNTRY, TIMESTAMP. Code, author and country overlap many times in the table. I am trying to discover whether there is one simple query that can be run to return (using a WHERE clause on COUNTRY) the author field, the code field, and then a final field that counts the number of times the CODE was present in the query result. So, theoretically I could end up with an array like:

        array('author', 'code', 'codeAppearsNTimes');

    Authors also have varying codes associated with them, so I don't want the results merged. I suppose the end result would be: "This author is associated with this code this many times". Is this possible with MySQL? Thanks in advance.
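
    Grouping on both columns gives exactly the "author, code, how many times" shape. A minimal sketch, with a hypothetical table name and country value:

        -- Sketch: one row per (author, code) pair with its occurrence count.
        SELECT AUTHOR, CODE, COUNT(*) AS codeAppearsNTimes
        FROM entries
        WHERE COUNTRY = 'US'
        GROUP BY AUTHOR, CODE;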


  • Placing & deleting element(s) from an object (stack)

    - by Chris
    Hello. I am initialising two stack objects:

        Stack s1 = new Stack(), s2 = new Stack();
        // s1 = 0 0 0 0 0 0 0 0 0 0 (array of 10 elements, empty to start with), top: 0

        const int Rows = 10;
        int[] Table = new int[Rows];

        public void TableStack(int[] Table)
        {
            for (int i = 0; i < Table.Length; i++)
            {
            }
        }

    My question is: how exactly do I place an element on the stack (push) or take an element from the stack (pop)? For example:

    Push:

        s1.Push(5); // s1 = 5 0 0 0 0 0 0 0 0 0 (top: 1)
        s1.Push(9); // s1 = 5 9 0 0 0 0 0 0 0 0 (top: 2)

    Pop:

        int number = s1.Pop(); // s1 = 5 0 0 0 0 0 0 0 0 0 (top: 1), 9 got removed

    Do I have to use get and set, and if so, how exactly do I implement this with an array? I'm not really sure what to use for this. Any hints highly appreciated.

    The program uses the following driver to test the Stack class (which cannot be changed or modified):

        public void ExecuteProgram()
        {
            Console.Title = "StackDemo";
            Stack s1 = new Stack(), s2 = new Stack();
            ShowStack(s1, "s1");
            ShowStack(s2, "s2");
            Console.WriteLine();

            int getal = TryPop(s1);
            ShowStack(s1, "s1");
            TryPush(s2, 17);
            ShowStack(s2, "s2");
            TryPush(s2, -8);
            ShowStack(s2, "s2");
            TryPush(s2, 59);
            ShowStack(s2, "s2");
            Console.WriteLine();

            for (int i = 1; i <= 3; i++)
            {
                TryPush(s1, 2 * i);
                ShowStack(s1, "s1");
            }
            Console.WriteLine();

            for (int i = 1; i <= 3; i++)
            {
                TryPush(s2, i * i);
                ShowStack(s2, "s2");
            }
            Console.WriteLine();

            for (int i = 1; i <= 6; i++)
            {
                getal = TryPop(s2); // use number
                ShowStack(s2, "s2");
            }
        } /*ExecuteProgram*/

    Regards.
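
    A minimal array-backed Stack sketch that matches the push/pop behaviour described above (the driver's TryPush/TryPop/ShowStack helpers are not shown in the question, so they are not reproduced here):

        using System;

        // Sketch: fixed-size array stack; `top` is the index of the next free slot.
        public class Stack
        {
            private const int Rows = 10;
            private readonly int[] table = new int[Rows];
            private int top;                      // starts at 0: the stack is empty

            public bool IsEmpty { get { return top == 0; } }
            public bool IsFull { get { return top == Rows; } }

            public void Push(int value)
            {
                if (IsFull) throw new InvalidOperationException("stack is full");
                table[top++] = value;             // store, then advance top
            }

            public int Pop()
            {
                if (IsEmpty) throw new InvalidOperationException("stack is empty");
                int value = table[--top];         // retreat top, then read
                table[top] = 0;                   // clear the slot, as in the example
                return value;
            }
        }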


  • dump csv from sqlalchemy

    - by afilatun
    For some reason, I want to dump a table from a database (SQLite3) as a CSV file. I'm using a Python script with Elixir (based on SQLAlchemy) to modify the database. I was wondering whether there is any way to dump the table I use to CSV. I've seen the SQLAlchemy serializer, but it doesn't seem to be what I want. Am I doing it wrong? Should I call the sqlite3 Python module after closing my SQLAlchemy session, to dump to a file instead? Or should I use something homemade?
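
    For what it's worth, the standard-library csv module can consume rows straight from a SQLAlchemy connection, so no second database handle is needed. A minimal sketch (table and file names hypothetical; SQLAlchemy of this question's era accepts a raw SQL string here, while newer versions want sqlalchemy.text()):

        import csv
        from sqlalchemy import create_engine

        # Sketch: stream one SELECT into a CSV file via the same engine
        # the Elixir/SQLAlchemy session is built on.
        engine = create_engine("sqlite:///mydata.db")

        with engine.connect() as conn, open("mytable.csv", "w", newline="") as f:
            result = conn.execute("SELECT * FROM mytable")
            writer = csv.writer(f)
            writer.writerow(result.keys())   # header row from result metadata
            writer.writerows(result)         # each row iterates as column values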


  • JQuery JQGrid local data loading issue

    - by ollie314
    Hi, I've got a problem with the following code:

        <script type="text/javascript">
        var mydata = [ {id:"1",name:"foo"}, {id:"2",name:"bar"} ];
        jQuery(document).ready(function() {
            jQuery("#lgrid").jqGrid({
                data: mydata,
                datatype: "local",
                height: 150,
                width: 600,
                rowNum: 10,
                rowList: [10,20,30],
                colNames: ['id','name'],
                colModel: [
                    {name:'id', index:'id', width:60, sorttype:"int"},
                    {name:'name', index:'name', width:60}],
                pager: "#pgrid",
                viewrecords: true,
                caption: "Contacts"
            });
        });
        </script>

    And in the body:

        <table id="lgrid"></table>
        <div id="pgrid"></div>

    With this code, the data is never displayed in the grid. Does somebody have an idea about this issue? Thanks.
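
    If the data option keeps coming up empty (support for it varies across older jqGrid releases), one fallback that predates it is to build the grid empty and push rows in with addRowData. A sketch under that assumption:

        // Sketch: same grid definition without `data`, then add each row by id.
        jQuery(document).ready(function () {
            jQuery("#lgrid").jqGrid({
                datatype: "local",
                height: 150,
                width: 600,
                colNames: ['id', 'name'],
                colModel: [
                    {name: 'id', index: 'id', width: 60, sorttype: "int"},
                    {name: 'name', index: 'name', width: 60}],
                pager: "#pgrid",
                viewrecords: true,
                caption: "Contacts"
            });
            for (var i = 0; i < mydata.length; i++) {
                jQuery("#lgrid").jqGrid('addRowData', mydata[i].id, mydata[i]);
            }
        });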

