Search Results

Search found 10384 results on 416 pages for 'plan cache'.

Page 224/416 | < Previous Page | 220 221 222 223 224 225 226 227 228 229 230 231  | Next Page >

  • MySQL query very slow on Amazon RDS but really fast on my laptop?

    - by Luc
    I would love to know if anybody knows why this is happening. I've just migrated our website over to Amazon RDS, and our biggest query, which takes 0.2 seconds to execute on my MacBook, takes 1.3 seconds to execute on the most expensive RDS instance. Obviously I've disabled the query cache on my local machine (and tested this), and both databases are exactly the same: InnoDB, with identical indexes, etc. It's costing us a fortune ($2000 per month) for the fastest RDS instance, and I'm losing faith quickly. Any ideas?
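
    One way to tell whether that extra second is network round trips or server-side execution is to compare the client's wall-clock time against MySQL's own session profiler; a sketch, where the endpoint, schema, and query are all placeholders and credentials are assumed to live in ~/.my.cnf:

        # wall-clock time as the client sees it (includes network latency)
        time mysql -h mydb.example.us-east-1.rds.amazonaws.com mydb \
            -e "SELECT COUNT(*) FROM orders"

        # server-side execution time only, via the session profiler
        mysql -h mydb.example.us-east-1.rds.amazonaws.com mydb \
            -e "SET profiling = 1; SELECT COUNT(*) FROM orders; SHOW PROFILES;"

    If SHOW PROFILES reports roughly 0.2s while the wall clock says 1.3s, the gap is the network path to RDS rather than the database itself.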

    Read the article

  • Linux - Create ftp account with read/write access to only 1 folder

    - by Gublooo
    Hey guys... I have never worked on Linux and don't plan on working on it either; the only command I probably know is "ls" :) I host my website on Eapps and use their cPanel to set everything up, so I've never worked with Linux directly. Now I have a one-time case where I need to give a contractor access to fix the CSS issues on my website. He basically needs FTP (read/write) access to certain folders. At a high level, this is my code structure:
        /home/webadmin/example.com/html/images
        /home/webadmin/example.com/html/css
        /home/webadmin/example.com/html/js
        /home/webadmin/example.com/html/login.php
        /home/webadmin/example.com/html/facebook.php
        /home/webadmin/example.com/application/library
        /home/webadmin/example.com/application/views
        /home/webadmin/example.com/application/models
        /home/webadmin/example.com/application/controllers
        /home/webadmin/example.com/application/config
        /home/webadmin/example.com/application/bootstrap.php
        /home/webadmin/example.com/cgi-bin
    I want the new user to have access to only these folders:
        /home/webadmin/example.com/html/js
        /home/webadmin/example.com/html/css
        /home/webadmin/example.com/application/views
    He should not even be able to view the contents of the other folders, including files like bootstrap.php or login.php. If any sysadmins can help me set this account up, I will really appreciate it. Thanks
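
    One common pattern for this is a chrooted FTP user whose home directory contains bind mounts of only the permitted folders. A sketch assuming the vsftpd FTP server; the username and jail path are made up, and the filesystem is assumed to support ACLs:

        # a jailed user with no shell login
        useradd -d /home/ftpjail -s /sbin/nologin cssworker
        mkdir -p /home/ftpjail/js /home/ftpjail/css /home/ftpjail/views

        # expose only the three permitted folders inside the jail
        # (add these to /etc/fstab as well, or they vanish on reboot)
        mount --bind /home/webadmin/example.com/html/js /home/ftpjail/js
        mount --bind /home/webadmin/example.com/html/css /home/ftpjail/css
        mount --bind /home/webadmin/example.com/application/views /home/ftpjail/views

        # give the user write access to the real folders via ACLs
        setfacl -R -m u:cssworker:rwx /home/webadmin/example.com/html/js
        setfacl -R -m u:cssworker:rwx /home/webadmin/example.com/html/css
        setfacl -R -m u:cssworker:rwx /home/webadmin/example.com/application/views

        # in /etc/vsftpd/vsftpd.conf, confine local users to their home directory:
        #   local_enable=YES
        #   write_enable=YES
        #   chroot_local_user=YES

    Because the user is chrooted into /home/ftpjail, bootstrap.php, login.php, and the rest of the tree are simply not visible to him.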

    Read the article

  • How to copy all recovery points to another drive? - Norton Ghost

    - by chobo2
    Hi, I have Norton Ghost and two drives dedicated to backing up my files: one external, the other internal. My internal drive has now filled up with backups, and I want to copy all of those backups to my external drive. However, Ghost seems to want me to copy them one by one. Can I do it as a batch, or some mass copy that runs one after another, so I can leave my computer on all night? My plan is to fill up my internal drive, copy its contents to my external drive, then fill the internal drive up again and copy that over as well. Once my external drive fills up, I would start deleting the oldest backups, which would probably still leave me about 6 months of backups that I could go back through if I ever needed to.

    Read the article

  • 10 gigabit or 1 gigabit switch

    - by Guntis
    We are planning to move MySQL to a dedicated box. At the moment we have several web servers, and MySQL is running on each of them. The question is: is it cheaper to buy a 10G switch and put a 10G network card into the MySQL server, or to buy a normal gigabit switch and connect the MySQL box to it with multiple network cables? In the 1G scenario, we would give each web server a different MySQL IP address. I don't think a MySQL box with one 1G link is enough to satisfy the MySQL traffic from multiple web boxes. At the moment we have 3 servers which run MySQL/web; the plan is to add a fourth server for MySQL only. Thanks. Edit: if we buy a 1G switch with mini-GBIC ports, can we put 10G connectors in the mini-GBIC slots and then connect the MySQL box to that port?
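
    If the multi-cable route wins on price, the usual alternative to handing each web server its own MySQL IP is to bond the NICs on the MySQL box into one logical link. A sketch using the Linux bonding driver with iproute2; interface names and the address are made up, and 802.3ad mode needs a switch that supports LACP:

        modprobe bonding
        ip link add bond0 type bond mode 802.3ad    # LACP link aggregation
        ip link set eth0 down
        ip link set eth1 down
        ip link set eth0 master bond0
        ip link set eth1 master bond0
        ip addr add 192.168.1.10/24 dev bond0
        ip link set bond0 up

    Note that LACP hashes traffic per flow, so any single web server's connection still tops out at 1Gb/s; bonding only raises the aggregate across all the web boxes.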

    Read the article

  • Stability, x86 Vs Sparc

    - by Jason T
    Our project plans to migrate from SPARC to x86, and our HA requirement is 99.99%. Previously, on SPARC, we assumed hardware stability along the lines of a hardware failure every 4 months or even once a year, and we have test data for our application; from that we derived a recovery (failover) budget for each unplanned incident that lets us achieve 99.99% (52.6 minutes of unplanned downtime per year). But since we are going to use Intel x86, it seems the hardware stability is not as good as SPARC, though we don't have detailed data. So, compared with SPARC, how stable is Intel x86? Should we assume more unplanned downtime? If so, how much more, double? And where can I find more detailed data on these two types of hardware?
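
    For reference, the 52.6-minute figure follows directly from the availability target: (1 − 0.9999) × 365.25 days × 24 hours × 60 minutes ≈ 52.6 minutes of allowed unplanned downtime per year.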

    Read the article

  • Deleting large no of files on linux eats up CPU

    - by Sanjay
    I generate more than 50GB of cache files on my RHEL server (the typical file size is 200KB, so the number of files is huge). When I try to delete these files it takes 8-10 hours. However, the bigger issue is that the system load goes critical for those 8-10 hours. Is there any way I can keep the system load under control during the deletion? I tried using nice -n19 rm -rf * but that doesn't help with the system load. P.S. I asked the same question on superuser.com but didn't get a good enough answer, so I'm trying here.
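
    Since nice only lowers CPU priority and the bottleneck here is almost certainly I/O, one option is to run the deletion in the idle I/O scheduling class and avoid handing rm one enormous argument list. A sketch; the cache path is a placeholder:

        # idle I/O class: deletes only proceed when the disk is otherwise idle
        # (requires the CFQ I/O scheduler, the RHEL default)
        ionice -c3 find /var/cache/myapp -type f -delete

        # or delete in throttled batches with a pause between each
        find /var/cache/myapp -type f -print0 | xargs -0 -n 1000 sh -c 'rm -f "$@"; sleep 1' sh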

    Read the article

  • Server hung with "blocked for more than 120 seconds", diskless

    - by alterpub
    I have a server which hangs every 2-5 days. dmesg shows the following:
        kernel: [490894.231753] INFO: task munin-html:10187 blocked for more than 120 seconds.
        kernel: [490894.231799] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        kernel: [490894.231843] munin-html D 0000000000000000 0 10187 9796 0x00000000
        kernel: [490894.231878] ffff88063174b968 0000000000000082 ffff88063174b8d8 0000000000015b80
        kernel: [490894.231930] ffff88063174bfd8 0000000000015b80 ffff88063174bfd8 ffff88062fe644d0
        kernel: [490894.231982] 0000000000015b80 0000000000015b80 ffff88063174bfd8 0000000000015b80
        kernel: [490894.232033] Call Trace:
        kernel: [490894.232059] [<ffffffff8117fce0>] ? sync_buffer+0x0/0x50
        kernel: [490894.232089] [<ffffffff815a1f13>] io_schedule+0x73/0xc0
        kernel: [490894.232115] [<ffffffff8117fd25>] sync_buffer+0x45/0x50
        kernel: [490894.232143] [<ffffffff815a258f>] __wait_on_bit+0x5f/0x90
        kernel: [490894.232170] [<ffffffff8117fce0>] ? sync_buffer+0x0/0x50
        kernel: [490894.232197] [<ffffffff815a2638>] out_of_line_wait_on_bit+0x78/0x90
        kernel: [490894.232227] [<ffffffff81080250>] ? wake_bit_function+0x0/0x40
        kernel: [490894.232255] [<ffffffff8117fcd6>] __wait_on_buffer+0x26/0x30
        kernel: [490894.232288] [<ffffffffa00131be>] squashfs_read_data+0x1be/0x520 [squashfs]
        kernel: [490894.232320] [<ffffffff8114f0f1>] ? __mem_cgroup_try_charge+0x71/0x450
        kernel: [490894.232350] [<ffffffffa0013963>] squashfs_cache_get+0x1c3/0x320 [squashfs]
        kernel: [490894.232381] [<ffffffffa00136eb>] ? squashfs_copy_data+0x10b/0x130 [squashfs]
        kernel: [490894.232426] [<ffffffff815a3dbe>] ? _raw_spin_lock+0xe/0x20
        kernel: [490894.232454] [<ffffffffa0013b68>] ? squashfs_read_metadata+0x48/0xf0 [squashfs]
        kernel: [490894.232499] [<ffffffffa0013ae1>] squashfs_get_datablock+0x21/0x30 [squashfs]
        kernel: [490894.232544] [<ffffffffa0015026>] squashfs_readpage+0x436/0x4a0 [squashfs]
        kernel: [490894.232575] [<ffffffff8111a375>] ? __inc_zone_page_state+0x35/0x40
        kernel: [490894.232606] [<ffffffff8110d072>] __do_page_cache_readahead+0x172/0x210
        kernel: [490894.232636] [<ffffffff8110d131>] ra_submit+0x21/0x30
        kernel: [490894.232662] [<ffffffff811045f3>] filemap_fault+0x3f3/0x450
        kernel: [490894.232691] [<ffffffff812bd156>] ? prio_tree_insert+0x256/0x2b0
        kernel: [490894.232726] [<ffffffffa009225d>] aufs_fault+0x11d/0x170 [aufs]
        kernel: [490894.232755] [<ffffffff8111f6d4>] __do_fault+0x54/0x560
        kernel: [490894.232782] [<ffffffff81122f39>] handle_mm_fault+0x1b9/0x440
        kernel: [490894.232811] [<ffffffff811286f5>] ? do_mmap_pgoff+0x335/0x380
        kernel: [490894.232840] [<ffffffff815a7af5>] do_page_fault+0x125/0x350
        kernel: [490894.232867] [<ffffffff815a4675>] page_fault+0x25/0x30
    OS info:
        cat /etc/issue.net
        Ubuntu 10.04.2 LTS
        uname -a
        Linux Shard1Host3 2.6.35-32-server #68~lucid1-Ubuntu SMP Wed Mar 28 18:33:00 UTC 2012 x86_64 GNU/Linux
    This system boots via LTSP, has no hard drives, and has a lot of memory (24GB):
        free -m
                     total       used       free     shared    buffers     cached
        Mem:         24152      17090       7061          0         50        494
        -/+ buffers/cache:      16545       7607
        Swap:            0          0          0
    I'm putting the vmstat info here too, though I don't think it shows much since it was taken after a reboot:
        vmstat 1 30
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd    free   buff  cache   si   so    bi    bo    in    cs us sy id wa
         0  0      0 7231156  52196 506400    0    0     0     0   172   143  7  0 92  0
         1  0      0 7231024  52196 506400    0    0     0     0  7859 16233  5  0 94  0
         0  0      0 7231024  52196 506400    0    0     0     0  7870 16446  2  0 98  0
         0  0      0 7230900  52196 506400    0    0     0     0  7308 15661  5  0 95  0
         0  0      0 7231100  52196 506400    0    0     0     0  7960 16543  6  0 94  0
         0  0      0 7231100  52196 506400    0    0     0     0  7542 16047  5  1 94  0
         3  0      0 7231100  52196 506400    0    0     0     0  7709 16621  3  0 96  0
         0  0      0 7231220  52196 506400    0    0     0     0  7857 16552  4  0 96  0
         0  0      0 7231220  52196 506400    0    0     0     0  7192 15491  6  0 94  0
         0  0      0 7231220  52196 506400    0    0     0     0  7423 15792  5  1 94  0
         1  0      0 7231260  52196 506404    0    0     0     0  7686 16296  2  0 98  0
         0  0      0 7231260  52196 506404    0    0     0     0  6976 15183  5  0 95  0
         0  0      0 7231260  52196 506404    0    0     0     0  7303 15600  4  0 95  0
         0  0      0 7231320  52196 506404    0    0     0     0  7967 16241  1  0 98  0
         0  0      0 7231444  52196 506404    0    0     0     0  6948 15113  6  0 94  0
         0  0      0 7231444  52196 506404    0    0     0     0  7931 16181  6  0 94  0
         1  0      0 7231516  52196 506404    0    0     0     0  7715 15829  6  0 94  0
         0  0      0 7231516  52196 506404    0    0     0     0  7771 16036  2  0 97  0
         0  0      0 7231268  52196 506404    0    0     0     0  7782 16202  6  0 94  0
         1  0      0 7231212  52196 506404    0    0     0     0  7457 15622  4  0 96  0
         0  0      0 7231212  52196 506404    0    0     0     0  7573 16045  2  0 98  0
         2  0      0 7231216  52196 506404    0    0     0     0  7689 16076  6  0 94  0
         0  0      0 7231424  52196 506404    0    0     0     0  7429 15650  4  0 95  0
         3  0      0 7231424  52196 506404    0    0     0     0  7534 16168  3  0 97  0
         1  0      0 7230548  52196 506404    0    0     0     0  8559 15926  7  1 92  0
         0  0      0 7230672  52196 506404    0    0     0     0  7720 15905  2  0 98  0
         0  0      0 7230548  52196 506404    0    0     0     0  7677 16313  5  0 95  0
         1  0      0 7230676  52196 506404    0    0     0     0  7209 15432  5  0 95  0
         0  0      0 7230800  52196 506404    0    0     0     0  7522 15861  2  0 98  0
         0  0      0 7230552  52196 506404    0    0     0     0  7760 16661  5  0 95  0
    In the munin (monitoring system) graphs I see the following (before the server hung):
        Disk (nbd0) IOs per device: read 289m, versus a weekly average of 2.09m
        Disk (nbd0) throughput per device: read 4.73k, versus a weekly average of 108.76
        Disk (nbd0) utilization per device: 100%, versus a weekly average of 1.2%
        Eth0 traffic was low: in/out only 2Mbps
        Number of threads increased to 566, usually 392
        Fork rate 1.08, usually 2.82
        VMStat (process states) increased to 17.77, from 0 as far as I can see in the graph
        CPU usage: iowait 880.46%, when usually 6.67%
    It would be great if somebody could help me understand what's going on.

    Read the article

  • How to get the Three.js import/export scripts into Blender on Ubuntu?

    - by Bane
    I have been working with 3D primitives in Three.js, but now I want to import some models. I plan on using Blender, which I have just downloaded with:
        sudo apt-get install blender
    However, I was instructed to put the import/export scripts in the .blender/2.62/scripts/addons folder, but it does not exist! .blender/2.62 does exist, but it only has a config folder. The next thing I did was manually change the script search path in Blender's preferences from // to a scripts folder in my home folder, which contains the required io_mesh_threejs folder (which, in turn, has the .py scripts inside). I saved the changes and restarted Blender, but still nothing: the menus make no mention of Three.js at all! What do I do? It would be great if I knew Blender's installation path, because maybe I could put those scripts there manually. Where should it be installed? EDIT: these are the scripts I'm talking about, along with the instructions: https://github.com/mrdoob/three.js/tree/master/utils/exporters/blender.
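
    For Blender 2.6x the scripts have to land in an addons directory Blender actually scans, and the exporter then has to be enabled in User Preferences before any menu entry appears. A sketch of the two usual candidate locations on Ubuntu; both paths are assumptions that vary by packaging, so verify with the dpkg query shown:

        # per-user addons folder: create it if it doesn't exist yet
        mkdir -p ~/.blender/2.62/scripts/addons
        cp -r io_mesh_threejs ~/.blender/2.62/scripts/addons/

        # or the system-wide folder used by the Ubuntu package
        dpkg -L blender | grep addons      # shows where the package keeps its addons
        sudo cp -r io_mesh_threejs /usr/share/blender/scripts/addons/

    Afterwards, in Blender, open File > User Preferences > Add-Ons and tick the Three.js importer/exporter; only then do the menu entries show up.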

    Read the article

  • DC on Hyper V Host

    - by Saif Khan
    I've read a few similar questions but am still not clear on this. I have a small office (13 users) and recently purchased a new single server (this is all I have to work with for now). Server specs:
        Memory: 16GB
        Drives: 6 SCSI (2TB)
        Processor: dual quad-core
    I plan on making the Hyper-V host the DC, and then running 2 VMs: one as a file server and the other as an application server. Question: since I am restricted to this single server, would it be a big issue to make the Hyper-V host the domain controller? Your input is greatly appreciated.

    Read the article

  • How can I sync Nokia smartphones' calendars with Snow Leopard Server's iCal Server?

    - by Joe Carroll
    I've just installed and configured a Mac Mini Server for a customer who wanted to stop using Google Apps for their email. The plan was to also use the new server's calendaring service but we've hit a small snag: the staff all use iCal on Macs and Nokia E71s and there doesn't seem to be a way to use a single calendar for each person that syncs with both. iSync/iCal doesn't appear to allow their manually-synced phones to sync with the CalDAV calendars on the server, only with local calendars on their Macs. Nor does Nokia's organiser software support CalDAV but rather SyncML for live networked syncing. I was wondering if there's a plugin for iCal Server that would provide a bridge to a SyncML service I could run on the server... or anything else that would work (preferably FOSS)!

    Read the article

  • How can I use dynamic routing with openvpn tunnels?

    - by pQd
    I'm thinking about using dynamic routing [OSPF or RIP] via OpenVPN tunnels. Right now I have a few offices connected in a full mesh, but this is not a scalable solution as we add more locations. I would like to avoid the situation where plenty of internal traffic is affected if one of the two VPN termination points I plan to use goes down. Do you have a similar configuration working in production? If so, what routing daemon did you use: quagga? Something else? Did you encounter any problems? Thanks!
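
    A minimal sketch of what the OSPF side could look like in quagga's ospfd.conf, advertising the tunnel network and the local LAN at each site; the subnets are made up, and RIP would be configured analogously in ripd.conf:

        ! /etc/quagga/ospfd.conf (sketch; subnets are placeholders)
        router ospf
        ! advertise the OpenVPN tunnel mesh and this office's LAN
         network 10.8.0.0/24 area 0
         network 192.168.10.0/24 area 0
        ! no OSPF hellos toward the interface with no neighbors on it
         passive-interface eth0

    With this at every site, a failed tunnel endpoint simply drops out of the OSPF topology and traffic reconverges over the surviving tunnels instead of blackholing.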

    Read the article

  • How do I set permissions structure for multiple users editing multiple sites in /var/www on Ubuntu 9

    - by Michael T. Smith
    I'm setting up an Ubuntu server that will have 3 or 4 VirtualHosts that I want users to be able to work in (add new files, edit old files, etc.). I currently plan on storing the sites in /var/www, but wouldn't be opposed to moving them. I know how to add new users and I know how to add new groups; I'm unsure of the best way to restrict users to editing only some sites. I read over the answers here in this question, so I was thinking I could set up a group and add users to that group, but then they'd all have essentially the same permissions. Am I just going to have to assign each user specific permissions, or is there a better way of handling this? Added: I should also note that each user will log in via SSH/SFTP only; the users would never need to do anything else on the server.
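
    A widely used layout for exactly this is one Unix group per site plus the setgid bit on that site's directory tree, so group membership is what grants edit rights. A sketch; the site and user names are made up:

        # one group per VirtualHost
        groupadd site1
        usermod -aG site1 alice        # repeat for each user allowed to edit site1

        # hand the tree to the group
        chgrp -R site1 /var/www/site1
        chmod -R 775 /var/www/site1

        # setgid on directories: files created there inherit the site1 group
        find /var/www/site1 -type d -exec chmod g+s {} +

    A user who belongs to site1 and site3 but not site2 can then edit exactly those two trees, with no per-user permissions to maintain.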

    Read the article

  • NetBeans 7.2 MinGW installing for OpenCV

    - by Gligorijevic
    I have installed MinGW on my PC according to http://netbeans.org/community/releases/72/cpp-setup-instructions.html, and I have "restored defaults" in NetBeans 7.2, which found all the necessary files. But when I build a test sample C++ app I get the following error:
        c:/mingw/bin/../lib/gcc/mingw32/4.6.2/../../../../mingw32/bin/ld.exe: cannot find -ladvapi32
        c:/mingw/bin/../lib/gcc/mingw32/4.6.2/../../../../mingw32/bin/ld.exe: cannot find -lshell32
        c:/mingw/bin/../lib/gcc/mingw32/4.6.2/../../../../mingw32/bin/ld.exe: cannot find -luser32
        c:/mingw/bin/../lib/gcc/mingw32/4.6.2/../../../../mingw32/bin/ld.exe: cannot find -lkernel32
        collect2: ld returned 1 exit status
        make[2]: *** [dist/Debug/MinGW-Windows/welcome_1.exe] Error 1
        make[1]: *** [.build-conf] Error 2
        make: *** [.build-impl] Error 2
    Can anyone give me a hand with installing OpenCV and MinGW for NetBeans? The generated Makefile goes like this:
        # CMAKE generated file: DO NOT EDIT!
        # Generated by "MinGW Makefiles" Generator, CMake Version 2.8

        # Default target executed when no arguments are given to make.
        default_target: all
        .PHONY : default_target

        #=============================================================================
        # Special targets provided by cmake.

        # Disable implicit rules so canonical targets will work.
        .SUFFIXES:

        # Remove some rules from gmake that .SUFFIXES does not remove.
        SUFFIXES =

        .SUFFIXES: .hpux_make_needs_suffix_list

        # Suppress display of executed commands.
        $(VERBOSE).SILENT:

        # A target that is always out of date.
        cmake_force:
        .PHONY : cmake_force

        #=============================================================================
        # Set environment variables for the build.

        SHELL = cmd.exe

        # The CMake executable.
        CMAKE_COMMAND = "C:\Program Files (x86)\cmake-2.8.9-win32-x86\bin\cmake.exe"

        # The command to remove a file.
        RM = "C:\Program Files (x86)\cmake-2.8.9-win32-x86\bin\cmake.exe" -E remove -f

        # Escaping for special characters.
        EQUALS = =

        # The program to use to edit the cache.
        CMAKE_EDIT_COMMAND = "C:\Program Files (x86)\cmake-2.8.9-win32-x86\bin\cmake-gui.exe"

        # The top-level source directory on which CMake was run.
        CMAKE_SOURCE_DIR = C:\msys\1.0\src\opencv

        # The top-level build directory on which CMake was run.
        CMAKE_BINARY_DIR = C:\msys\1.0\src\opencv\build\mingw

        #=============================================================================
        # Targets provided globally by CMake.

        # Special rule for the target edit_cache
        edit_cache:
            @$(CMAKE_COMMAND) -E cmake_echo_color --switch=$(COLOR) --cyan "Running CMake cache editor..."
            "C:\Program Files (x86)\cmake-2.8.9-win32-x86\bin\cmake-gui.exe" -H$(CMAKE_SOURCE_DIR) -B$(CMAKE_BINARY_DIR)
        .PHONY : edit_cache

        # Special rule for the target edit_cache
        edit_cache/fast: edit_cache
        .PHONY : edit_cache/fast
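
    Those "cannot find -ladvapi32 / -lshell32 / -luser32 / -lkernel32" errors mean ld cannot locate the Windows API import libraries (libadvapi32.a and friends) that normally ship with MinGW's w32api package, so the toolchain itself is incomplete rather than anything OpenCV- or NetBeans-specific. One thing worth checking, assuming the mingw-get installer is present (the package name may vary, e.g. mingw32-w32api on some setups):

        # do the import libraries exist where ld looks?
        ls c:/mingw/lib/libadvapi32.a c:/mingw/lib/libkernel32.a

        # if not, (re)install the Windows API headers and import libraries
        mingw-get install w32api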

    Read the article

  • Performing Aggregate Functions on Multi-Million Row Tables

    - by Daniel Short
    I'm having some serious performance issues with a multi-million row table that I feel I should be able to get results from fairly quickly. Here's a rundown of what I have, how I'm querying it, and how long it's taking. I'm running SQL Server 2008 Standard, so partitioning isn't currently an option. I'm attempting to aggregate all views for all inventory for a specific account over the last 30 days. All views are stored in the following table:
        CREATE TABLE [dbo].[LogInvSearches_Daily](
            [ID] [bigint] IDENTITY(1,1) NOT NULL,
            [Inv_ID] [int] NOT NULL,
            [Site_ID] [int] NOT NULL,
            [LogCount] [int] NOT NULL,
            [LogDay] [smalldatetime] NOT NULL,
            CONSTRAINT [PK_LogInvSearches_Daily] PRIMARY KEY CLUSTERED
            (
                [ID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
        ) ON [PRIMARY]
    This table has 132,000,000 records and is over 4 gigs. A sample of 10 rows from the table:
        ID   Inv_ID  Site_ID  LogCount  LogDay
        ---  ------  -------  --------  -------------------
        1    486752  48       14        2009-07-21 00:00:00
        2    119314  51       16        2009-07-21 00:00:00
        3    313678  48       25        2009-07-21 00:00:00
        4    298863  0        1         2009-07-21 00:00:00
        5    119996  0        2         2009-07-21 00:00:00
        6    463777  534      7         2009-07-21 00:00:00
        7    339976  503      2         2009-07-21 00:00:00
        8    333501  570      4         2009-07-21 00:00:00
        9    453955  0        12        2009-07-21 00:00:00
        10   443291  0        4         2009-07-21 00:00:00
        (10 row(s) affected)
    I have the following index on LogInvSearches_Daily:
        CREATE NONCLUSTERED INDEX [IX_LogInvSearches_Daily_LogDay]
        ON [dbo].[LogInvSearches_Daily] ( [LogDay] ASC )
        INCLUDE ( [Inv_ID], [LogCount] )
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
              IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    I need to pull inventory only from the Inventory table for a specific account id, and I have an index on Inventory as well. I'm using the following query to aggregate the data and give me the top 5 records. This query is currently taking 24 seconds to return the 5 rows:
        SELECT TOP 5
               Sum(LogCount) AS Views,
               DENSE_RANK() OVER(ORDER BY Sum(LogCount) DESC, Inv_ID DESC) AS Rank,
               Inv_ID
        FROM LogInvSearches_Daily D (NOLOCK)
        WHERE LogDay > DateAdd(d, -30, getdate())
          AND EXISTS( SELECT NULL
                      FROM propertyControlCenter.dbo.Inventory (NOLOCK)
                      WHERE Acct_ID = 18731
                        AND Inv_ID = D.Inv_ID )
        GROUP BY Inv_ID
        (1 row(s) affected)
        StmtText
        ----------------------------------------------------------------------------
        |--Top(TOP EXPRESSION:((5)))
             |--Sequence Project(DEFINE:([Expr1007]=dense_rank))
                  |--Segment
                       |--Segment
                            |--Sort(ORDER BY:([Expr1006] DESC, [D].[Inv_ID] DESC))
                                 |--Stream Aggregate(GROUP BY:([D].[Inv_ID]) DEFINE:([Expr1006]=SUM([LOALogs].[dbo].[LogInvSearches_Daily].[LogCount] as [D].[LogCount])))
                                      |--Sort(ORDER BY:([D].[Inv_ID] ASC))
                                           |--Nested Loops(Inner Join, OUTER REFERENCES:([D].[Inv_ID]))
                                                |--Nested Loops(Inner Join, OUTER REFERENCES:([Expr1011], [Expr1012], [Expr1010]))
                                                |    |--Compute Scalar(DEFINE:(([Expr1011],[Expr1012],[Expr1010])=GetRangeWithMismatchedTypes(dateadd(day,(-30),getdate()),NULL,(6))))
                                                |    |    |--Constant Scan
                                                |    |--Index Seek(OBJECT:([LOALogs].[dbo].[LogInvSearches_Daily].[IX_LogInvSearches_Daily_LogDay] AS [D]), SEEK:([D].[LogDay] > [Expr1011] AND [D].[LogDay] < [Expr1012]) ORDERED FORWARD)
                                                |--Index Seek(OBJECT:([propertyControlCenter].[dbo].[Inventory].[IX_Inventory_Acct_ID]), SEEK:([propertyControlCenter].[dbo].[Inventory].[Acct_ID]=(18731) AND [propertyControlCenter].[dbo].[Inventory].[Inv_ID]=[LOA
        (13 row(s) affected)
    I tried using a CTE to pick up the rows first and aggregate them, but that didn't run any faster, and it gives essentially the same execution plan:
        (1 row(s) affected)
        StmtText
        ----------------------------------------------------------------------------
        --SET SHOWPLAN_TEXT ON;
        WITH getSearches AS (
            SELECT LogCount
            --     , DENSE_RANK() OVER(ORDER BY Sum(LogCount) DESC, Inv_ID DESC) AS Rank
                 , D.Inv_ID
            FROM LogInvSearches_Daily D (NOLOCK)
            INNER JOIN propertyControlCenter.dbo.Inventory I (NOLOCK)
                    ON Acct_ID = 18731
                   AND I.Inv_ID = D.Inv_ID
            WHERE LogDay > DateAdd(d, -30, getdate())
            -- GROUP BY Inv_ID
        )
        SELECT Sum(LogCount) AS Views, Inv_ID
        FROM getSearches
        GROUP BY Inv_ID
        (1 row(s) affected)
        StmtText
        ----------------------------------------------------------------------------
        |--Stream Aggregate(GROUP BY:([D].[Inv_ID]) DEFINE:([Expr1004]=SUM([LOALogs].[dbo].[LogInvSearches_Daily].[LogCount] as [D].[LogCount])))
             |--Sort(ORDER BY:([D].[Inv_ID] ASC))
                  |--Nested Loops(Inner Join, OUTER REFERENCES:([D].[Inv_ID]))
                       |--Nested Loops(Inner Join, OUTER REFERENCES:([Expr1008], [Expr1009], [Expr1007]))
                       |    |--Compute Scalar(DEFINE:(([Expr1008],[Expr1009],[Expr1007])=GetRangeWithMismatchedTypes(dateadd(day,(-30),getdate()),NULL,(6))))
                       |    |    |--Constant Scan
                       |    |--Index Seek(OBJECT:([LOALogs].[dbo].[LogInvSearches_Daily].[IX_LogInvSearches_Daily_LogDay] AS [D]), SEEK:([D].[LogDay] > [Expr1008] AND [D].[LogDay] < [Expr1009]) ORDERED FORWARD)
                       |--Index Seek(OBJECT:([propertyControlCenter].[dbo].[Inventory].[IX_Inventory_Acct_ID] AS [I]), SEEK:([I].[Acct_ID]=(18731) AND [I].[Inv_ID]=[LOALogs].[dbo].[LogInvSearches_Daily].[Inv_ID] as [D].[Inv_ID]) ORDERED FORWARD)
        (8 row(s) affected)
        (1 row(s) affected)
    So given that I'm getting good Index Seeks in my execution plan, what can I do to get this running faster? Thanks, Dan

    Read the article

  • How to move a windows machine properly from RAID 1 to raid 10?

    - by goober
    Goal: I would like to add two more hard drives to my current RAID 1 setup and create a RAID 0 setup on top of the two RAID 1 setups (which I believe is referred to as "RAID 10").
    Components Involved:
        Intel P68 chipset motherboard
        4 SATA ports that can be configured for RAID
        An Intel SSD cache that sits in front of the RAID, and a 64GB SSD configured in that manner
        Two 1TB HDDs configured in RAID 1
        OS: Windows 7 Professional
    Resources Consulted so far: I found a great resource on LinuxQuestions.org for a good "best practices" process for Linux machines, but I'd like to develop a similar process that I know works on Windows machines.

    Read the article

  • Will I need a dedicated static IP or a unique IP is enough to SSL enable my website?

    - by Devner
    Hi, this is the first time I am dealing with SSL and a dedicated static IP / unique IP. My webhost says they will provide a unique IP (not shared with other customers) but do NOT guarantee that it will be static. Now, I plan to make my website SSL-enabled and install an SSL certificate. So, in order to SSL-enable my website, will I really need a dedicated static IP, or will this unique IP (without the guarantee that it will be static) be enough? What problems will I face if the IP is not static? I have already bought hosting from them, and they only showed me that option while adding optional services to the account (after I placed my order), so I did not have a clue about this beforehand. Thank you all in advance.

    Read the article

  • AD Users outside the building

    - by gammaRED
    I've never had a customer ask me this before, but they keep insisting that if they have Active Directory and a domain, mobile (road warrior) users will not be able to log in to their laptops when they are at home or away from the office. I told them that Windows would use cached credentials to allow this. Am I right or wrong? I've been told this is how it works and found a couple of forums saying the same thing. What is really going on, and how are the laptops able to do this?

    Read the article

  • Recovering files that do not appear in the Recycle Bin, but are in the $Recycle.bin folder on external drive

    - by Zach Morgan
    Problem: I have an NTFS external drive with a $Recycle.bin folder on the root (E:/$Recycle.bin/) that holds about 70GB worth of data. For whatever reason, the folder is no longer a hidden system folder, and no Windows machine I have used the drive on will show its files in the actual Recycle Bin.
    What I Want To Do: I want to at least view the recycle bin files from this external drive; all of the help articles I have read just talk about deleting the folder altogether. I plan on reformatting the drive, but first I need to see whether there are any important deleted files in it.
    What Didn't Work: Recuva didn't see any of my files. I also tried resetting the external's Recycle Bin via the command prompt and moving the old $Recycle.bin files into the new external $Recycle.bin folder (I didn't read this anywhere, just made it up on my own).

    Read the article

  • Killing Self-resurrecting Files

    - by Zian Choy
    Local computer: Dell desktop, Windows XP, named "5"
    Server: Windows Server 2008, named "W"
    Whenever I delete a file, log out, and log back in, the file reappears. The files all supposedly live on the server (e.g. "\\W\Home$\zchoy\Desktop"), but even when the files are deleted from both the server and the local computer, they come back. I've already tried resetting the offline files cache. I also tried deleting a file and then synchronizing with the server, and the file didn't come back; however, as mentioned earlier, it reappears once I log out and log in.

    Read the article

  • VPN only connects once and then fails

    - by Toby Allen
    I have a VPN connection set up from home to my office; it works great except for one thing. Once I connect and then disconnect, the next time I try to connect it hangs on verifying my username and password, no matter how many times I retry. Once I restart my router, it works again. My router is a Belkin; the router on the other end is a DrayTek. Any ideas? Is there a cache somewhere that needs to be reset, or a setting?

    Read the article

  • How can I update fontconfig to a newer version in Red Hat 5.3?

    - by user16654
    I want to update fontconfig to a newer version, but it seems that the OS is still finding the old fontconfig, and I need the newer version to build Qt. How do I make Red Hat 5.3 see the newer version? I don't know if this helps, but when I did a search for fontconfig I found some files in a folder called cache. When I do yum update it tells me everything is up to date, but that version is too old and is missing FcFreeTypeQueryFace. Just send me a comment if this is the wrong site and I'll change it.
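
    Since RHEL 5's yum will keep fontconfig pinned at the distribution's old version, a common approach is to build the newer release under its own prefix and point the Qt build at it rather than replacing the system copy. A sketch; the version number and prefix are placeholders:

        # build the newer fontconfig into /opt, leaving the system copy untouched
        tar xzf fontconfig-2.8.0.tar.gz && cd fontconfig-2.8.0
        ./configure --prefix=/opt/fontconfig && make && sudo make install

        # make the build environment find it ahead of the system version
        export PKG_CONFIG_PATH=/opt/fontconfig/lib/pkgconfig:$PKG_CONFIG_PATH
        export LD_LIBRARY_PATH=/opt/fontconfig/lib:$LD_LIBRARY_PATH

        # then re-run Qt's configure from this same shell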

    Read the article

  • Openfire and LDAP issues

    - by clsmith
    Thanks in advance for the help. Has anyone seen this issue with Openfire? Currently I run Openfire on Fedora, authenticating against Windows 2003, with MySQL for the database. When I bring up two clients and they talk to each other, messages are slow: sometimes it can take 5-15 minutes for something sent to reach the other person (and this is with only two people on the Openfire server). I ran a tcpdump on port 389 and saw that the machine is running thousands of queries against LDAP. When I load the capture into Wireshark, I notice that it is transferring the entire contact list, or checking on the status of the entire contact list. When I run debug on Openfire itself, I am presented with only this small message in the log:
        2010.06.08 07:01:17 LdapManager: Starting LDAP search...
        2010.06.08 07:01:17 LdapManager: ... search finished
        2010.06.08 07:01:17 LdapManager: Creating a DirContext in LdapManager.getContext()...
        2010.06.08 07:01:17 LdapManager: Created hashtable with context values, attempting to create context...
        2010.06.08 07:01:17 LdapManager: ... context created successfully, returning.
        2010.06.08 07:01:17 LdapManager: Trying to find a groups's DN based on it's groupname. cn: Spark agents CLT, Base DN: OU="Hidden",DC="Hidden",DC="net"...
        2010.06.08 07:01:17 LdapManager: Creating a DirContext in LdapManager.getContext()...
        2010.06.08 07:01:17 LdapManager: Created hashtable with context values, attempting to create context...
        2010.06.08 07:01:17 LdapManager: ... context created successfully, returning.
        2010.06.08 07:01:17 LdapManager: Starting LDAP search...
        2010.06.08 07:01:17 LdapManager: ... search finished
        2010.06.08 07:01:17 LdapManager: Trying to find a groups's DN based on it's groupname. cn: Spark agents CLT, Base DN: OU="Hidden",DC="Hidden",DC="net"...
        2010.06.08 07:01:17 LdapManager: Creating a DirContext in LdapManager.getContext()...
        2010.06.08 07:01:17 LdapManager: Created hashtable with context values, attempting to create context...
        2010.06.08 07:01:17 LdapManager: ... context created successfully, returning.
        2010.06.08 07:01:17 LdapManager: Starting LDAP search...
        2010.06.08 07:01:17 LdapManager: ... search finished
    I thought this was a configuration issue on my end and started to look into the cache settings on the Openfire admin pages. I tweaked the settings as recommended by those pages and still get the same issue; Openfire doesn't seem to cache the contact list, or this might be a feature that was never fixed or implemented. Has anyone gone through this before? I have searched online and see that others have had great experiences with Openfire and no issues like mine, or is that because no one checked the queries? For the time being I created a new domain controller and moved Openfire onto that computer so it can run its queries locally. This seems to help a lot with the speed, but when I run the server performance manager tool I see that, with only two people using that Openfire server, it runs 593.7 requests per second. Thanks for your help; if I didn't provide enough data, please let me know what you need and I can find it.

    Read the article

  • Office 365 E3 with Exchange Hosted Encryption (EHE)

    - by Stephen
    I hope this is the right forum for posting this question. I have a client who wants to move to Office 365; they are currently running on a trial of the Office 365 E3 plan. My staff are now also using Office 365 E3 via the internal-use licences provided as part of the MS Cloud Partner benefits. We've searched high and low, spoken to about 15 different people at Office 365 Support as well as my local distributor's MS Product Manager, but we cannot seem to find out exactly how to purchase/subscribe to the Exchange Hosted Encryption (EHE) service, or how to configure/use it from Office 365. Does anybody out there have any insight into how we can set up and use the EHE service? Thanks! Stephen

    Read the article

< Previous Page | 220 221 222 223 224 225 226 227 228 229 230 231  | Next Page >