Search Results

Search found 11543 results on 462 pages for 'partition wise join'.

Page 190/462 | < Previous Page | 186 187 188 189 190 191 192 193 194 195 196 197  | Next Page >

  • Show the specific field on mysql table based on active date

    - by mrjimoy_05
    Suppose that I have 3 tables:

        A) Table UsrHeader
           UsrID | UsrName
           ----------------
           1     | Abc
           2     | Bcd

        B) Table UsrDetail
           UsrID | UsrLoc | Date
           -----------------------------
           1     | LocA   | 10 Aug 2012
           1     | LocB   | 15 Aug 2012
           2     | LocA   | 10 Aug 2012

        C) Table Trx
           TrxID | TrxDate     | UsrID
           ----------------------------
           1     | 10 Aug 2012 | 1
           2     | 16 Aug 2012 | 1
           3     | 11 Aug 2012 | 2

    What I want to do is show a table like:

        TrxID | TrxDate     | UsrID | UsrLoc
        -------------------------------------
        1     | 10 Aug 2012 | 1     | LocA
        2     | 16 Aug 2012 | 1     | LocB
        3     | 11 Aug 2012 | 2     | LocA

    Notice that there is one user but different locations; the UsrDetail table records that the user moved to another location on a specified date. So every transaction should show the location the user had on that date. I have tried this code, but no luck:

        SELECT trx.TrxID, trx.TrxDate, trx.UsrID, User.UsrName, User.UsrLoc
        FROM trx
        INNER JOIN (
            SELECT UsrHeader.UsrID, UsrHeader.UsrName, UserDetail.UsrLoc
            FROM UsrHeader
            INNER JOIN (
                SELECT UsrDetail.UsrID, UsrDetail.UsrLoc, UsrDetail.Date
                FROM UsrDetail
            ) AS UserDetail ON UserDetail.UsrID = UsrHeader.UsrID
        ) AS User ON User.UsrID = trx.UsrID AND trx.TrxDate >= User.Date

    How do I do that? Thanks.
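
    One possible approach (an untested sketch, assuming MySQL, the table and column names above, and that Date/TrxDate are stored as comparable date values): for each transaction, pick the UsrDetail row with the latest Date on or before TrxDate, for example with a correlated subquery:

        -- Untested sketch: for each transaction, take the newest UsrDetail row
        -- dated on or before the transaction date and read the location from it.
        SELECT t.TrxID,
               t.TrxDate,
               t.UsrID,
               d.UsrLoc
        FROM Trx AS t
        INNER JOIN UsrDetail AS d
                ON d.UsrID = t.UsrID
        WHERE d.Date = (SELECT MAX(d2.Date)
                        FROM UsrDetail AS d2
                        WHERE d2.UsrID = t.UsrID
                          AND d2.Date <= t.TrxDate);

    If UsrName is also needed, UsrHeader can be joined in on UsrID as in the original query.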

  • No such file or directory in Python on Linux only (coming from Windows)

    - by user1804633
    I have the exact same directory structure within a folder in Windows and in Linux (Debian) - the script sits alongside the static and dataoutput folders. How come the following code works fine in Windows, but gives a "no such file or directory" path error in Linux?

        @app.route('/_getdataoutputfilelisting')
        def getdataoutputfilelisting():
            listoffilesindataouput = getfiles('static/dataoutput')
            return jsonify(listoffiles=listoffilesindataouput)

        def getfiles(dirpath):
            a = [s for s in os.listdir(dirpath) if os.path.isfile(os.path.join(dirpath, s))]
            a.sort(key=lambda s: os.path.getmtime(os.path.join(dirpath, s)))
            a.reverse()
            return a

    Is there a way to make it universal so that it works in both OSs? Thanks

  • Delete rows out of a table that is inner joined and unioned with 2 others

    - by jonathan
    We have 3 tables (table1, table2, table3), and I need to delete all the rows from table1 that have the same ID in table2 OR table3. To see a list of all of these rows I have this code:

        (SELECT table2.ID, table2.name_first, table2.name_last, table2.Collected
         FROM table2
         INNER JOIN table1 ON table1.ID = table2.ID
         WHERE table2.Collected = 'Y')
        UNION
        (SELECT table3.ID, table3.name_first, table3.name_last, table3.Collected
         FROM table3
         INNER JOIN table1 ON table1.ID = table3.ID
         WHERE table3.Collected = 'Y')

    I get back about 200 rows. How do I delete them from table1? I don't have a way to test whether my query will work, so I'm nervous about modifying something I found online and potentially deleting data (we do have backups, but I'd rather not test out their integrity). TIA!
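
    One possible approach (an untested sketch, assuming MySQL and the table/column names above): delete from table1 only those rows whose ID shows up in the collected rows of table2 or table3, and run the subqueries as a plain SELECT first to confirm they return the ~200 expected IDs:

        -- Untested sketch: remove table1 rows whose ID matches a collected row
        -- in table2 or table3. Verify the subqueries with a SELECT before deleting.
        DELETE FROM table1
        WHERE ID IN (SELECT table2.ID FROM table2 WHERE table2.Collected = 'Y')
           OR ID IN (SELECT table3.ID FROM table3 WHERE table3.Collected = 'Y');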

  • What will or won't cause a thread to block (a question from a test)

    - by fingerprint211b
    I had a test, and there was a question I lost some points on because I wasn't able to answer it: which of the following is NOT a condition which can cause a thread to block?

        1. Calling an object's wait() method
        2. Waiting for an I/O operation
        3. Calling sleep()
        4. Calling yield()
        5. Calling join()

    As far as I know, all of these are blocking calls:

        - wait() blocks until something calls notify()
        - if the thread is waiting for an I/O operation then it's obviously blocked
        - sleep() obviously blocks until the time runs out, or something wakes up the thread
        - yield() "cancels the rest of the thread's timeslice" (for lack of a better term), and returns only when the thread is active again
        - join() blocks until the thread it's waiting for terminates

    Am I missing something here?

  • Codeigniter: how do I select count when `$query->num_rows()` doesn't work for me?

    - by mOrloff
    I have a query which is returning a sum, so naturally it returns one row. I need to count the number of records in the DB which made that sum. Here's a sample of the type of query I am talking about (MySQL):

        SELECT i.id, i.vendor_quote_id, i.product_id_requested,
               SUM(i.quantity_on_hand) AS qty,
               COUNT(i.quantity_on_hand) AS count
        FROM vendor_quote_item AS i
        JOIN vendor_quote_container AS c ON i.vendor_quote_id = c.id
        LEFT JOIN company_types ON company_types.company_id = c.company_id
        WHERE company_types.company_type = 'f'
          AND i.product_id_requested = 12345678

    I have found and am now using the select_min(), select_max(), and select_sum() functions, but my COUNT() is still hard-coded in. The main problem is that I am having to specify the table name in a tightly coupled manner with something like $this->db->select('COUNT(myDbPrefix_vendor_quote_item.quantity_on_hand) AS count'), which kills portability and makes switching environments a PIA. How can/should I get the count values I am after with CI in an uncoupled way?

  • mysql query for change in values in a logging table

    - by kiasectomondo
    I have a table like this:

        Index | PersonID | ItemCount | UnixTimeStamp
        1     | 1        | 1         | 1296000000
        2     | 1        | 2         | 1296000100
        3     | 2        | 4         | 1296003230
        4     | 2        | 6         | 1296093949
        5     | 1        | 0         | 1296093295

    Time and index always go up. It's basically a logging table to log the item count each time it changes. I get the most recent ItemCount for each person like this:

        SELECT *
        FROM table a
        INNER JOIN (
            SELECT MAX(index) AS i
            FROM table
            GROUP BY PersonID
        ) b ON a.index = b.i;

    What I want to do is get the most recent record for each PersonID that is at least 24 hours older than the most recent record for that PersonID. Then I want to take the difference in ItemCount between these two to get the change in item count for each person over the last 24 hours:

        personID | ChangeInItemCountOverAtLeast24Hours
        1        | 3
        2        | -11
        3        | 6

    I'm sort of stuck with what to do next. How can I join another ItemCount based on the latest adjusted timestamp of individual rows?
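
    One possible approach (an untested sketch, assuming MySQL and the column names above; `item_log` is a placeholder for the real table name, which isn't given): self-join the table so that one side is the newest row per person and the other side is the newest row at least 24 hours (86400 seconds) older, then subtract the item counts:

        -- Untested sketch. `item_log` is a placeholder table name.
        -- latest = newest row per person; prev = newest row at least 24h older than latest.
        SELECT latest.PersonID,
               latest.ItemCount - prev.ItemCount AS ChangeInItemCountOverAtLeast24Hours
        FROM item_log AS latest
        INNER JOIN item_log AS prev
                ON prev.PersonID = latest.PersonID
        WHERE latest.UnixTimeStamp = (SELECT MAX(l1.UnixTimeStamp)
                                      FROM item_log AS l1
                                      WHERE l1.PersonID = latest.PersonID)
          AND prev.UnixTimeStamp = (SELECT MAX(l2.UnixTimeStamp)
                                    FROM item_log AS l2
                                    WHERE l2.PersonID = latest.PersonID
                                      AND l2.UnixTimeStamp <= latest.UnixTimeStamp - 86400);

    People with no row at least 24 hours older than their newest row simply drop out of the result under this sketch.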

  • amazon cloud vs rackspace cloud

    - by machaa
    Hi, I'm looking to get a dedicated server - in the process I read about Amazon cloud computing and Rackspace Cloud Servers. Now I'm not sure which one to opt for. Could somebody suggest one, performance- and price-wise? Regards

  • DNSSEC - What doesn't it cover?

    - by KP65
    I'm currently revising for an exam to do with DNS/DNSSEC. While I know DNSSEC provides various security enhancements for DNS, I would like to dive a bit deeper (for my own thirst for knowledge!) and would like to know what is still problematic security-wise even after DNSSEC is employed. After all, it can't have solved all the problems DNS was having with regard to security, right? Thanks

  • How to make the Tun module load at Linux startup

    - by harmony
    I installed the tun module using:

        modprobe tun

    then did:

        lsmod | grep tun
        tun   83840   0

    How do I make tun load again after a reboot? This is written on the Hamachi website: "...Then add tun to the list of modules by using your favorite text editor and create /etc/modules-load.d/tun.conf":

        # Load tun module at boot.
        tun

    But this folder does not exist in my /etc. Is it wise to add the line "modprobe tun" to /etc/rc.local?

  • How do I remove Xen kernel and put normal kernel on RHEL 5

    - by yan bellavance
    I have 3 identical machines (hardware-wise) that all have RHEL 5.3 installed. 2 of those machines have the Xen kernel and one doesn't. I cannot install the NVIDIA drivers on the ones that have the Xen kernel, so I was wondering how I managed to do this and how to replace them with normal kernels. Could this have happened during install time when, for example, I was queried on certain components to install (development, virtualization, webserver)?

  • Investigate high load on RHEL

    - by Adam Matan
    One of my RHEL 5 servers was showing a high load (~4-5) in uptime. The load increased, and when it reached 6 (±), the server froze and needed a restart. According to top, the server had no significant CPU or memory issues, and sar showed no increase in iowait. Therefore, the thrashing must have been related to other factors. Any ideas how to investigate this? In particular, how do I know which processes are waiting in the queue?

  • How to recover data files from xampp-windows to xampp-linux after crash?

    - by David Buehler
    My Windows box died after I developed a database in XAMPP on it; fortunately I have a backup of the entire F:/TestWeb/Xampp partition. Unfortunately, I did not do an Export (nor a dump) of the "Lws2" database before the crash. I have replaced the defunct machine with one running Mint 7 (based on Ubuntu 9.04 "Jaunty Jackalope") and installed xampp-linux into the /opt partition, so the new XAMPP now runs fine in /opt/lampp, and says all the elements are secured by passwords (which I just assigned during this installation).

    I assumed that xampp-windows installed in November would migrate easily to xampp-linux installed in February -- a bad assumption. It apparently would have been simple if I had known enough to do an Export or a dump before the crash, but.... The backup was done to a Network Attached Storage drive, which is formatted as "vfat", so the backup does not carry with it any valid ownership permissions from MySQL on NTFS.

    I now see from my backup that the old data resided in \TestWeb\Xampp\Mysql\Data\Lws2\ and consists of 7 ".frm" files which define my tables. The actual data -- I suppose a ".sql" file or files -- has disappeared, and I am resigning myself to two days of retyping it. But I do not wish to do the table layouts all over again. So:

        I copied the Data tree to /opt/lampp/Data -- PhpMyAdmin does not see it.
        I copied the Lws2 tree to /opt/lampp/Lws2 -- PhpMyAdmin does not see it.
        I copied the Data tree to /opt/lampp/var/mysql/Data -- PhpMyAdmin does not see it.
        I copied the Lws2 tree to /opt/lampp/var/mysql/Lws2 -- PhpMyAdmin does not see it.

    I then adjusted all the permissions from owner "nobody" to owner "root" and gave full permissions to all groups and to all others, with the permissions percolating down, in all 4 trees. You guessed it -- PhpMyAdmin does not see any database named Lws2, only its 4 default ones. I double-checked the permissions, rebooted Linux, and repeated the tests. At some point in that process I did see PhpMyAdmin showing "lws2(7)", but when I clicked on it I saw a "no table found" message. I have not been able to recreate that experience.

    Apparently there are some setup files for MySQL and for PhpMyAdmin which need to be set up by running a wizard or two or by editing the files directly. I grepped the TestWeb tree and found an old ldir = "C:TestWeb\Xampp\MySql\" and a DataDir = C:TestWeb\Xampp\MySql\ in a .php file and in a .bat file, but I cannot find the corresponding config file names on the /opt partition -- so it looks as if these wizards have not been run to create them.

    What config files does Linux use to set up MySQL and PhpMyAdmin? What wizards do I need to run to point the MySQL engine and PhpMyAdmin at the folder /opt/lampp/data/ with its lws2 folder inside it? Or which files do I need to edit, with a sample of what each normally says under Linux?

    Incidentally, I remember I converted from MyISAM with its .MYD and .MYI files to InnoDB after entering only a small amount of the data -- and I do not know what file types to look for -- perhaps my data is still there but under another guise or in another place? Is it something as simple as Linux needing to see "/data/" instead of "/Data"? I will check that out while waiting for a response.

    If anyone can point me to documentation that discusses this level of detail -- I will read it avidly! In any case, thanks for any clarification you can give on this thorny problem. wizdum

  • Cloud Providers that support FreeBSD?

    - by Jed Daniels
    I'm looking for recommendations from the wise and all-knowing Server Fault community on cloud hosting providers that support running FreeBSD. Ideally ones that don't require special tweaks to the FreeBSD system, but any recommendations would be appreciated. Suggestions? Recommendations? Advice? Tips? War stories? Thanks in advance.

  • Apply Local Policy to Terminal Server (Windows Server 2008 R2 Standard - Workgroup )

    - by Param
    I have created 5 local users on Windows Server 2008 R2 Standard in a workgroup. This server is also a Terminal Server. Is it possible to apply local user policy (gpedit.msc) on a per-user basis? I want to accomplish the following tasks:

        1) Restrict some Control Panel items per user
        2) Software restriction per user
        3) Hide Administrative Tools in the Control Panel and Start menu for users who have only user rights
        4) A user must not be able to see other users' data

    Thanks & Regards, Param

  • Password recovery toolkit

    - by John Craggs
    I am using Wise Password Recover 2009 and am basically satisfied with its wide compatibility, but it fails to retrieve the password for one of my Outlook accounts. Is there any other password recovery toolkit that can do the recovery for me?

  • RHCS: GFS2 in A/A cluster with common storage. Configuring GFS with rgmanager

    - by Pavel A
    I'm configuring a two-node A/A cluster with common storage attached via iSCSI, which uses GFS2 on top of clustered LVM. So far I have prepared a simple configuration, but am not sure which is the right way to configure the gfs resource. Here is the rm section of /etc/cluster/cluster.conf:

        <rm>
            <failoverdomains>
                <failoverdomain name="node1" nofailback="0" ordered="0" restricted="1">
                    <failoverdomainnode name="rhc-n1"/>
                </failoverdomain>
                <failoverdomain name="node2" nofailback="0" ordered="0" restricted="1">
                    <failoverdomainnode name="rhc-n2"/>
                </failoverdomain>
            </failoverdomains>
            <resources>
                <script file="/etc/init.d/clvm" name="clvmd"/>
                <clusterfs name="gfs" fstype="gfs2" mountpoint="/mnt/gfs" device="/dev/vg-cs/lv-gfs"/>
            </resources>
            <service name="shared-storage-inst1" autostart="0" domain="node1" exclusive="0" recovery="restart">
                <script ref="clvmd">
                    <clusterfs ref="gfs"/>
                </script>
            </service>
            <service name="shared-storage-inst2" autostart="0" domain="node2" exclusive="0" recovery="restart">
                <script ref="clvmd">
                    <clusterfs ref="gfs"/>
                </script>
            </service>
        </rm>

    This is what I mean: when using the clusterfs resource agent to handle a GFS partition, it is not unmounted by default (unless the force_unmount option is given). This way, when I issue

        clusvcadm -s shared-storage-inst1

    clvm is stopped, but GFS is not unmounted, so a node cannot alter the LVM structure on the shared storage anymore, but can still access data. And even though a node can do it quite safely (dlm is still running), this seems rather inappropriate to me, since clustat reports that the service on that particular node is stopped. Moreover, if I later try to stop cman on that node, it will find a dlm lock produced by GFS and fail to stop.

    I could have simply added force_unmount="1", but I would like to know the reason behind the default behavior. Why is it not unmounted? Most of the examples out there silently use force_unmount="0", some don't, but none of them give any clue on how the decision was made.

    Apart from that, I have found sample configurations where people manage GFS partitions with the gfs2 init script - https://alteeve.ca/w/2-Node_Red_Hat_KVM_Cluster_Tutorial#Defining_The_Resources - or even as simply as just enabling services such as clvm and gfs2 to start automatically at boot (http://pbraun.nethence.com/doc/filesystems/gfs2.html), like:

        chkconfig gfs2 on

    If I understand the latter approach correctly, such a cluster only controls whether nodes are still alive and can fence errant ones, but it has no control over the status of its resources. I have some experience with Pacemaker, and I'm used to all resources being controlled by the cluster, so that an action can be taken not only when there are connectivity issues, but also when any of the resources misbehave. So, which is the right way for me to go:

        1. Leave the GFS partition mounted (any reasons to do so?)
        2. Set force_unmount="1". Won't this break anything? Why is this not the default?
        3. Use a script resource <script file="/etc/init.d/gfs2" name="gfs"/> to manage the GFS partition.
        4. Start it at boot and don't include it in cluster.conf (any reasons to do so?)

    This may be the sort of question that cannot be answered unambiguously, so it would also be of much value to me if you shared your experience or expressed your thoughts on the issue. How does /etc/cluster/cluster.conf look, for example, when configuring gfs with Conga or ccs (they are not available to me since for now I have to use Ubuntu for the cluster)? Thank you very much!

  • Steps to diagnose performance bottlenecks on Mac OS X

    - by Dave Cahill
    If you wanted to track down performance issues on a machine running Mac OS X and find out what was causing slowdowns, which command-line or graphical tools would you use, and how would you use them? I'm interested in advice on the best tools, and explanations of how to use them - when a machine slows down or freezes up, I'd like to be able to dig down and understand what's going on, memory / disk / CPU-wise. Thanks.

  • How much does HDD cache matter with Linux softraid?

    - by Jawa
    I'm in the process of renewing/expanding my disk sets, but I'm not quite sure what kind of disks to get, cache-wise. What difference does a disk cache of 16/32/64 MB make in, say, 1/1.5/2 TB SATA disks? The disks will be used in a webapp server and in a media workstation, with Linux softraid in RAID-1/RAID-5 configurations. Note that as both purposes are purely for a hobby, the price tag for a dozen disks is a big issue.

  • Does the size of the monitor matter?

    - by Arsheep
    I have an old computer, and I want to buy a big LCD. The best I can find is ViewSonic's 24" LCD TFT monitor. Will it run without any problems, or do I need to upgrade the video card or something too? The computer is not that old: it has a P4 board and a Celeron processor with 128 MB of graphics memory, and in the display properties it shows the maximum resolution I can use is 1280 x 1024. I am a noob hardware-wise, so I need help on this stuff. Thanks

  • Can splitting an Access database cause printer and reporting issues?

    - by leeand00
    We have a setup in which our users log into an Access database using MS Access 2003 over an RDP connection. The users log into their own machines first using a roaming profile. They then click an RDP connection file on the desktop and log into the remote server, via RDP, where they use MS Access as the shell; they don't have access to any of the explorer.exe features such as the Start menu. The database they are logging into is more of an application, and provides functionality for entering data, querying data, and running reports via form-based menus.

    It all worked pretty well until we split the database as it was nearing 2 GB in size. We moved the payroll data out into a separate partition, a database with the same name in a different folder, both of them on the server. Only two tables were moved into this new database partition, and they were re-linked as external tables in the new partition.

    Now, while everything appears to be working fine data-wise after the split, there's a new issue when our users log in via RDP and attempt to run reports: often the report will not display and instead the user sees an error about the click event of the form. At first I didn't even know it was printer-related, as we didn't really change anything related to the printers as far as I knew. Confused about the error, I talked to the guy who previously worked here and who was in charge of splitting the database, and he told me to tell the users to set their default printers (on their local machines, not on the server) to the "printer" Microsoft XPS Document Writer, which isn't a physical printer at all. This allowed the users to display their reports, but if they want to print out reports, they are required to go to the File menu and select Print; clicking the print icon on the toolbar takes them to a Save As... dialog, as would be expected when using the Microsoft XPS Document Writer as the default printer.

    It's easy to tell if a user is having the problem, because a quick mouseover of the printer icon will yield a tooltip of (none) when they cannot access their reports, and a tooltip of Microsoft XPS Document Writer when they can view the reports. If the user's default printer is set to anything other than Microsoft XPS Document Writer on their local machine, then (none) is always displayed when they RDP to the database. The RDP settings are set up to transfer the local printer to the server.

    Telling the users to do this to print has been more of a band-aid on the whole situation until we find a better solution and an explanation as to why splitting a database would prevent users from printing or even viewing Access database reports. Which is why I'm here asking this question.

    Also of note, all the printers on the network now show up on the server, so that when the users do click File->Print to print their reports on a physical printer, they have to look through a huge list of printers to find theirs in the dropdown. So the little band-aid fix we have is not ideal. Previously, only the printers on the user's local machine were displayed here, not all the printers on the network.

    My co-worker seems to think this has something to do with permissions; I personally think it has to do with roaming profiles and Group Policies, which is what I've been reading up on. I really don't know how to fix this or how it is related to splitting the database.

  • Outlook accepting meetings on behalf of?

    - by user14714
    A couple of my users are having a problem where they will accept a meeting request, but the acceptance notice sent to the meeting coordinator says, "Accepted on behalf of X user by Y user." I have triple-checked the permission settings, and none of the people named as accepting on behalf of others have access (not that they are actually doing the accepting anyway). We are currently using an Exchange 2003 server with Office 2007. OS-wise it's XP Pro SP3.
