Search Results

Search found 11051 results on 443 pages for 'group concat'.

Page 352/443

  • Will disabling hyperthreading improve performance on our SQL Server install

    - by Sam Saffron
    Related to: Current wisdom on SQL Server and Hyperthreading. We recently upgraded our Windows 2008 R2 database server from an X5470 to an X5560. In theory the two CPUs have very similar performance; if anything, the X5560 is slightly faster. However, SQL Server 2008 R2 performance has been pretty bad over the last day or so and CPU usage has been pretty high. Page life expectancy is massive and we are getting almost a 100% cache hit ratio for pages, so memory is not the problem. When I ran:

        SELECT * FROM sys.dm_os_wait_stats ORDER BY signal_wait_time_ms DESC

    I got:

        wait_type                    waiting_tasks_count  wait_time_ms  max_wait_time_ms  signal_wait_time_ms
        ---------------------------  -------------------  ------------  ----------------  -------------------
        XE_TIMER_EVENT               115166               2799125790    30165             2799125065
        REQUEST_FOR_DEADLOCK_SEARCH  559393               2799053973    5180              2799053973
        SOS_SCHEDULER_YIELD          152289883            189948844     960               189756877
        CXPACKET                     234638389            2383701040    141334            118796827
        SLEEP_TASK                   170743505            1525669557    1406              76485386
        LATCH_EX                     97301008             810738519     1107              55093884
        LOGMGR_QUEUE                 16525384             2798527632    20751319          4083713
        WRITELOG                     16850119             18328365      1193              2367880
        PAGELATCH_EX                 13254618             8524515       11263             1670113
        ASYNC_NETWORK_IO             23954146             6981220       7110              1475699

        (10 row(s) affected)

    I also ran:

        -- Isolate top waits for server instance since last restart or statistics clear
        WITH Waits AS
        (
            SELECT wait_type,
                   wait_time_ms / 1000. AS [wait_time_s],
                   100. * wait_time_ms / SUM(wait_time_ms) OVER() AS [pct],
                   ROW_NUMBER() OVER(ORDER BY wait_time_ms DESC) AS [rn]
            FROM sys.dm_os_wait_stats
            WHERE wait_type NOT IN ('CLR_SEMAPHORE','LAZYWRITER_SLEEP','RESOURCE_QUEUE',
                'SLEEP_TASK','SLEEP_SYSTEMTASK','SQLTRACE_BUFFER_FLUSH','WAITFOR','LOGMGR_QUEUE',
                'CHECKPOINT_QUEUE','REQUEST_FOR_DEADLOCK_SEARCH','XE_TIMER_EVENT','BROKER_TO_FLUSH',
                'BROKER_TASK_STOP','CLR_MANUAL_EVENT','CLR_AUTO_EVENT','DISPATCHER_QUEUE_SEMAPHORE',
                'FT_IFTS_SCHEDULER_IDLE_WAIT','XE_DISPATCHER_WAIT','XE_DISPATCHER_JOIN')
        )
        SELECT W1.wait_type,
               CAST(W1.wait_time_s AS DECIMAL(12, 2)) AS wait_time_s,
               CAST(W1.pct AS DECIMAL(12, 2)) AS pct,
               CAST(SUM(W2.pct) AS DECIMAL(12, 2)) AS running_pct
        FROM Waits AS W1
        INNER JOIN Waits AS W2 ON W2.rn <= W1.rn
        GROUP BY W1.rn, W1.wait_type, W1.wait_time_s, W1.pct
        HAVING SUM(W2.pct) - W1.pct < 95; -- percentage threshold

    and got:

        wait_type            wait_time_s  pct    running_pct
        CXPACKET             554821.66    65.82  65.82
        LATCH_EX             184123.16    21.84  87.66
        SOS_SCHEDULER_YIELD  37541.17     4.45   92.11
        PAGEIOLATCH_SH       19018.53     2.26   94.37
        FT_IFTSHC_MUTEX      14306.05     1.70   96.07

    That shows huge amounts of time spent synchronizing queries involving parallelism (high CXPACKET). Additionally, anecdotally, many of these problem queries are being executed on multiple cores (we have no MAXDOP hints anywhere in our code). The server has been under load for no more than a day or so. We are seeing a large amount of variance in query execution times; many queries appear to be slower than they were on our previous DB server, and CPU is really high. Will disabling hyperthreading help reduce our CPU usage and increase throughput?
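    Since the wait profile points at parallelism rather than at hyperthreading as such, one lower-risk experiment worth trying before touching the BIOS is to cap the degree of parallelism instance-wide. A sketch only, not a confirmed fix for this server; the value 4 is an assumed starting point to tune:

        -- Sketch: limit parallelism server-wide and watch whether CXPACKET waits drop.
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max degree of parallelism', 4;  -- assumed starting point
        RECONFIGURE;

    If CXPACKET and overall CPU fall after this, the variance is coming from parallel plans rather than from hyperthreading itself.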


  • Mac OS X 10.6.3: how does Apache config work?

    - by w-
    I just got a MacBook Pro 15" so I'm unfamiliar with how the filesystem is laid out. Looking around, I noticed a few paths specifying httpd.conf:

        /etc/apache2/httpd.conf
        /opt/local/apache2/conf/httpd.conf
        /private/etc/apache2/httpd.conf

    The config files differ in lots of ways (user, group, server root, modules that are loaded, etc.), and the apache2 folders themselves also differ greatly. It seems the one being used is either /etc/apache2/httpd.conf or /private/etc/apache2/httpd.conf. I'm wondering if I messed up my system after installing some packages (php5, django, etc.) via MacPorts and ended up with two apache2 instances. My questions are hence: which httpd.conf is the one being used? What are the other files for? Thanks.

    --update--
    To clarify, I didn't explicitly install apache2 via MacPorts; I'm wondering if it was installed as a dependency. After more hunting around I'm learning I never should have installed PHP to begin with, because Snow Leopard already includes PHP 5.3 from the get-go: http://serverfault.com/questions/82410/apache-2-and-php-5-3-via-macports. I'll need to open another question that asks about how the Mac filesystem works. Thanks all for the replies.
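    For questions like this, a quick way to see which binary and config file are actually live (a sketch assuming both the stock Apache and a MacPorts one are installed) is:

        # Ask the Apache build on the PATH which config it was compiled to read
        apachectl -V | grep -i SERVER_CONFIG_FILE
        # List every httpd/apachectl on the PATH (MacPorts installs under /opt/local)
        which -a httpd apachectl

    Note that on OS X, /etc is a symlink to /private/etc, so those two httpd.conf paths are the same file; the /opt/local one belongs to MacPorts.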


  • 'txn-current-lock': Permission denied [500, #13] - Subversion + Apache Configuration Issue

    - by wfoster
    Current setup: Fedora 13 32-bit, Apache 2.2.16, Subversion repositories under /var/www/svn. I have two different repositories under this directory, so my /etc/httpd/conf.d/subversion.conf is set up this way:

        LoadModule dav_svn_module     modules/mod_dav_svn.so
        LoadModule authz_svn_module   modules/mod_authz_svn.so

        <Location /svn>
            DAV svn
            SVNListParentPath on
            SVNParentPath /var/www/svn
            <LimitExcept GET PROPFIND OPTIONS REPORT>
                AuthType Basic
                AuthName "Subversion Repository"
                AuthUserFile /etc/httpd/.htpasswd
                Require valid-user
            </LimitExcept>
        </Location>

    After copying over my repos I ran:

        chmod 755 -R /var/www/svn
        chcon -R -t httpd_sys_content_t /var/www/svn
        chown apache:apache -R /var/www/svn

    I can browse my repos fine through the browser, and I can update all my working copies, but when I try to check in from anywhere I get the same error:

        Can't open file '/var/www/svn/repo/db/txn-current-lock': Permission denied

    I have been working on this issue for a while now and can't seem to find a solution. It might be of some use to know that the repo existed on a different server before this; it has now been moved to this new server. Everything I have read seems to indicate that the permissions for Apache are incorrect, but Apache is set to run as user apache and group apache, so as far as I can tell my setup is correct. The behavior is not, though. Any ideas?

    Solution: The only way I was able to get this to work was to disable SELinux. It could also have been done by setting the proper booleans with SELinux via setsebool and getsebool, but since this is just a home server, I decided to disable SELinux and am reaping the benefits now.
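    If you would rather keep SELinux enforcing, the likely culprit above is the read-only label: httpd_sys_content_t lets Apache read but not write. A sketch of the writable-context alternative (paths as in the question):

        # Label the repositories writable for httpd instead of read-only
        chcon -R -t httpd_sys_rw_content_t /var/www/svn
        # Or make the labeling survive a relabel (requires policycoreutils-python):
        semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/svn(/.*)?"
        restorecon -R /var/www/svn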


  • SocketException (Timeout) only when running as scheduled task

    - by BVartin
    I'm running a C# web-scraper application (that I wrote) on a Windows Server 2003 instance, under a user belonging to the local Administrators group. When I run it within a desktop/remote-desktop session the application runs successfully, but when I schedule it to run under the same user/security context outside of a desktop session, all socket connections time out. The scheduled task calls a batch file which in turn calls the application. The Windows Server 2003 instance has a very basic configuration and isn't even connected to a domain. I cannot find anything in any firewall or security configuration which would prevent this, but maybe I have overlooked something. Can anyone be of any assistance?

        System.Net.WebException: Unable to connect to the remote server ---
        System.Net.Sockets.SocketException: A connection attempt failed because the
        connected party did not properly respond after a period of time, or
        established connection failed because connected host has failed to respond
        X.X.X.X:443
           at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress)
           at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6,
              Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult,
              Int32 timeout, Exception& exception)
           --- End of inner exception stack trace ---
           at System.Net.HttpWebRequest.GetResponse()
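    One difference between a desktop session and a scheduled task is that per-user proxy settings (IE/WinHTTP) may not apply outside the interactive session. Whether a proxy is involved here at all is an assumption; a sketch of a way to take it out of the equation, assuming the scraper reads a standard app.config:

        <!-- MyScraper.exe.config (hypothetical name): ignore any per-user proxy -->
        <configuration>
          <system.net>
            <defaultProxy>
              <proxy usesystemdefault="false" bypassonlocal="true" />
            </defaultProxy>
          </system.net>
        </configuration>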


  • How to auto-cc a system email account any time a user creates an appointment

    - by Ferdy
    I will not bother explaining my full architecture or my reasons for wanting this, in order to keep this question short: Is it possible to auto-cc a certain email account any time an Exchange user creates an appointment or meeting in his own calendar?

    Is it possible using rules?
    Our Exchange 2007 server is outsourced; I cannot change the configuration or install plugins server-side.
    Preferably it should still work server-side, because users may use the Outlook client but also Outlook Web Access.
    Is there any other way, perhaps using group policies?

    My conclusion so far is that the only viable way to accomplish this is to build an Outlook add-on. The problem there is that it will need to be managed for thousands of desktop users, and the add-on will not work when using another client (OWA, mobile). An alternative architecture could be to pull the information from the user's calendar on a scheduled basis. Given that we are talking about a lot of users, scalability is a major issue; this has also been confirmed by Microsoft. Can you confirm that my thinking is correct, or do you have any other solutions?
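    For the "pull on a schedule" variant, the outsourced server would still need EWS access; a minimal sketch using the EWS Managed API (the service URL, credentials, and seven-day window are assumptions, and Exchange 2007 SP1 or later is required):

        // Sketch: enumerate a user's upcoming appointments via the EWS Managed API
        // (requires Microsoft.Exchange.WebServices.dll).
        using System;
        using Microsoft.Exchange.WebServices.Data;

        class CalendarPoller
        {
            static void Main()
            {
                var service = new ExchangeService(ExchangeVersion.Exchange2007_SP1);
                service.Credentials = new WebCredentials("user", "password", "domain"); // placeholders
                service.Url = new Uri("https://mail.example.com/EWS/Exchange.asmx");    // assumed URL
                FindItemsResults<Appointment> appointments = service.FindAppointments(
                    WellKnownFolderName.Calendar,
                    new CalendarView(DateTime.Now, DateTime.Now.AddDays(7)));
                foreach (Appointment appt in appointments)
                    Console.WriteLine("{0} {1}", appt.Start, appt.Subject);
            }
        }

    Polling thousands of mailboxes this way is exactly where the scalability concern above bites, so it only demonstrates feasibility, not fitness at scale.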


  • 0 connected nodes in datastax opscenter

    - by gansbrest
    I installed opscenterd on a separate node outside of the cluster, but within the firewall (AWS security group). I tested all the possible ports between the agents and the OpsCenter server. No errors in the log:

        2013-10-30 01:07:23+0000 [FC_Cluster] INFO: Initializing event storage.
        2013-10-30 01:07:23+0000 [FC_Cluster] INFO: Attempting to load all persisted alert rules
        2013-10-30 01:07:23+0000 [FC_Cluster] INFO: Done loading persisted alert rules
        2013-10-30 01:07:23+0000 [FC_Cluster] INFO: Done initializing event storage.
        2013-10-30 01:07:23+0000 [FC_Cluster] INFO: Done loading persisted scheduled job descriptions
        2013-10-30 01:07:23+0000 [FC_Cluster] INFO: OpsCenter starting up.
        2013-10-30 01:07:23+0000 [] INFO: Finished starting new cluster services for FC_Cluster
        2013-10-30 01:08:04+0000 [FC_Cluster] INFO: Agent for ip 10.34.10.185 is version u'3.2.2'
        2013-10-30 01:08:04+0000 [FC_Cluster] INFO: Agent for ip 10.32.37.251 is version u'3.2.2'
        2013-10-30 01:08:04+0000 [FC_Cluster] INFO: Agent for ip 10.82.226.252 is version u'3.2.2'

    The most interesting part is that I can see some data in the OpsCenter UI: when I stop the agents no data is displayed, and when I start them it shows up again, yet at the same time it reports 0 connected nodes. Storage capacity is even funnier: 3 of 0 nodes. Any ideas why that could be happening?
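    A quick connectivity check from both directions can rule the security group in or out. This sketch assumes OpsCenter's default ports (61620 for agent-to-opscenterd traffic, 61621 for the agent listener), which may differ in a customized install:

        # From a cluster node to the OpsCenter host:
        nc -zv <opscenter-host> 61620
        # From the OpsCenter host to each agent:
        nc -zv 10.34.10.185 61621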


  • OpenVPN: ERROR: could not read Auth username from stdin

    - by user56231
    I managed to set up OpenVPN, but now I want to integrate a username/password authentication method. Even though I haven't added auth-nocache in the server config, whenever I try to connect the client side returns the following message:

        ERROR: could not read Auth username from stdin

    My server.conf contains basic stuff; everything works until I try to implement this form of authentication.

        mode server
        dev tun
        proto tcp
        port 1194
        keepalive 10 120
        plugin /usr/lib/openvpn/openvpn-auth-pam.so login
        client-cert-not-required
        username-as-common-name
        auth-user-pass-verify /etc/openvpn/auth.pl via-env
        ca /etc/openvpn/easy-rsa/2.0/keys/ca.crt
        cert /etc/openvpn/easy-rsa/2.0/keys/server.crt
        key /etc/openvpn/easy-rsa/2.0/keys/server.key
        dh /etc/openvpn/easy-rsa/2.0/keys/dh1024.pem
        user nobody
        group nogroup
        server 10.8.0.0 255.255.255.0
        persist-key
        persist-tun
        #persist-local-ip
        status openvpn-status.log
        verb 3
        client-to-client
        push "redirect-gateway def1"
        push "dhcp-option DNS 10.8.0.1"
        log-append /var/log/openvpn
        comp-lzo

    I searched all over the net for a solution, and all the answers seem to be related to the auth-nocache parameter, which I haven't set. The directive auth-user-pass-verify /etc/openvpn/auth.pl via-env points to a script which is executed to perform the authentication; a failed authentication should result in exit 1, while a successful one should result in exit 0. For testing, auth.pl returns exit 0 no matter what the input is, but it seems the file is never executed before the error is raised. auth.pl contents:

        #!/usr/bin/perl
        my $user = $ENV{username};
        my $passwd = $ENV{password};
        printf("$user : $passwd\n");
        exit 0;

    Any ideas?
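    The error text is produced on the client, which suggests the client config never asks for credentials in the first place. A sketch of the client-side directive that is usually missing in this situation (the remote host is a placeholder):

        client
        dev tun
        proto tcp
        remote vpn.example.com 1194
        auth-user-pass              # prompt for username/password on connect
        # auth-user-pass auth.txt   # or read them from a file: user on line 1, password on line 2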


  • How do I deny access to everybody but me in Windows 7?

    - by GregH
    I am trying to set up a file server on my Windows 7 Pro system at home. I set up one common "Share" folder that I have shared/published. Within the share folder I want individual folders for me and my wife: only I can read/write my folder, only my wife can read/write her folder, and neither of us can read the contents of the other person's folder. Then I want a "public" folder where we can both read/write the contents of the folder and any sub-folders created, but my "kids" account can only read from this folder and its sub-folders. It seems really confusing to set up something like this, and it really shouldn't be. I am confused by the "allow", "deny", and dimmed check boxes in the security tab. It seems that if I "Deny" access to "Everyone" on my private folder, then I don't even have access to it myself. Windows security seems backwards from the rest of the world's security models: if I am in two groups and access is denied to one group but allowed to the other, Windows denies me access, because I am in one of the groups that has access disallowed. Very confusing.
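    That behavior is by design: Deny entries are evaluated before Allow entries, so the usual approach is to avoid Deny altogether, break inheritance, and grant access only to the accounts that should get in. A sketch from an elevated prompt (account and path names are hypothetical):

        :: Private folder: drop inherited ACEs, then allow only Greg (and SYSTEM).
        icacls "D:\Share\Greg" /inheritance:r /grant:r "Greg:(OI)(CI)F" "SYSTEM:(OI)(CI)F"
        :: Public folder: both adults can modify; kids can only read and traverse.
        icacls "D:\Share\Public" /inheritance:r /grant:r "Greg:(OI)(CI)M" "Jane:(OI)(CI)M" "Kids:(OI)(CI)RX" "SYSTEM:(OI)(CI)F"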


  • Why Is ModSecurity Unable to Access the Data Directory?

    - by tommytwoeyes
    Update: I think we've solved this; the problem appears to have been an incorrect SELinux context type on the /modsec_storage directory. However, we're still not sure, because although Apache was able to create files in that directory for the global and ip collections (global.dir/global.pag and ip.dir/ip.pag) after I changed the SELinux context type, the new files still have zero bytes. I'm new to ModSecurity and am not sure whether the files are empty because something is wrong with the configuration, or whether ModSecurity has simply determined it doesn't need to store IP addresses persistently after each transaction ends. Is anyone able to offer guidance here?

    I've recently installed ModSecurity (v2.5.12 / CRS v2.0.8) on our production server, and everything works great except for these errors that it keeps writing to the Apache error log:

        Failed to access DBM file "/modsec_storage/global": Permission denied
        [hostname "www.internationalstudent.com"] [uri "/includes/soc_bookmarks/images/delicious.png"]
        [unique_id "LZ6jc38AAAEAAFO6408AAABO"]
        Failed to access DBM file "/modsec_storage/ip": Permission denied
        [hostname "www.internationalstudent.com"] [uri "/includes/soc_bookmarks/images/delicious.png"]
        [unique_id "LZ6jc38AAAEAAFO6408AAABO"]

    After following the instructions for file permission settings in the ModSecurity handbook by Ivan Ristic, with no success, I created a /modsec_storage directory, set the owner and group to apache, and recursively set the permissions on the directory to 777. However, ModSecurity is still reporting the same permission errors, so I am stumped. Can anyone tell me how to fix this?
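    For reference, the moving parts involved are the SecDataDir directive plus a writable SELinux label on that path; a sketch (the directory is the one from the question, and httpd_sys_rw_content_t is the standard writable type for httpd):

        # httpd/ModSecurity config: where persistent collections are stored
        SecDataDir /modsec_storage

        # Shell: make the label persistent instead of a one-off chcon
        semanage fcontext -a -t httpd_sys_rw_content_t "/modsec_storage(/.*)?"
        restorecon -R /modsec_storage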


  • HaProxy - Http and SSL pass through config

    - by Bill
    I've currently got an HAProxy LB solution in place and everything is working fine; however, we are having an issue with a very few clients who cannot get to our site via HTTPS (SSL). They can browse our site over HTTP, but as soon as they click an absolute HTTPS link they are taken to our home page instead. Wondering if anyone can look at our config below and see if there's something awry. I believe we are on HAProxy 1.2.17.

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            #log loghost local0 info
            maxconn 6144
            #debug
            #quiet
            user haproxy
            group haproxy

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000
            stats auth # admin password
            stats uri /monitor

        listen webfarm
            # bind :80,:443
            bind :443
            mode tcp
            balance source
            #cookie SERVERID insert indirect
            #option httpclose
            #option forwardfor
            #option httpchk HEAD /check.cfm HTTP/1.0
            server webA 111.10.10.1
            #server webB 111.10.10.2
            server webB 111.10.10.3
            server webC 111.10.10.4

        listen webfarmhttp :80
            mode http
            balance source
            # option httpclose
            option forwardfor
            # option httpchk HEAD /check.cfm HTTP/1.0
            option httpchk /check.cfm
            server webA 111.10.10.1
            #server webB 111.10.10.2
            server webB 111.10.10.3
            server webC 111.10.10.4

        listen monitor :8443
            mode http
            balance roundrobin
            #cookie SERVERID insert indirect
            option httpclose
            option forwardfor
            #option httpchk HEAD /check.txt HTTP/1.0
            #option httpchk HEAD /check.cfm HTTP/1.0
            server webA 111.10.10.1
            server webB 111.10.10.2


  • Samba share - MS Excel error when saving (cannot access the file; there are several possible reasons)

    - by brain90
    Dear fellow ServerFaulters, I have a weird problem with my Samba share. I have one share definition for three clients (A, B, C). The share contains some Excel files which have a lot of formulas and are linked to each other. Client A accesses the files with LibreOffice (Ubuntu); client B accesses them with Windows XP and MS Office 2003. Reading and writing work successfully for both of them. The problem occurs when client C accesses the same file with MS Excel 2003 (Windows XP). This message box appears when he saves the file:

        Microsoft Office Excel cannot access the file '\\192.168.1.23\myshare\'.
        There are several possible reasons:
        - The file name or path does not exist.
        - The file is being used by another program.
        - The workbook you are trying to save has the same name as a currently open workbook.

    I tried http://support.microsoft.com/kb/291204 but it didn't work. Below is my share definition:

        [brainshare]
            comment = brainshare
            path = /opt/brainshare/
            valid users = @brainshare
            force group = brainshare
            read only = No
            create mask = 0775
            veto files = /*.scr/*.eml/thumbs.com/

    Help me please... Thanks in advance! Server: Ubuntu 10.10, Samba version 3.5.4
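    A commonly suggested mitigation for Office save problems on Samba shares (an assumption here, not a confirmed fix for this setup) is to disable opportunistic locking on the share, since Office's save-rename-replace sequence is sensitive to oplock breaks:

        [brainshare]
            ...
            # Sketch: disable oplocks for this share only
            oplocks = no
            level2 oplocks = no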


  • ssh timeout issue connecting to an EC2 instance on OS X

    - by mamusr
    I am new to AWS and not a networking expert, but curious to know more about it. I created a VPC with a public subnet only. Then I created an EC2 instance using an Ubuntu 14.04 64-bit PV AMI (ami-e84d8480), as well as generating the key pair needed to connect to it through ssh. I followed Amazon's instructions to connect to an EC2 instance via ssh, which did not work. Here is my attempted input and debug log, running on OS X 10.9.4:

        user$ ssh -vvv -i key.pem [email protected]
        OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: /etc/ssh_config line 20: Applying options for *
        debug1: /etc/ssh_config line 102: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to xxx.xxx.xxx.xxx [xxx.xxx.xxx.xxx] port 22.
        debug1: connect to address xxx.xxx.xxx.xxx port 22: Operation timed out
        ssh: connect to host xxx.xxx.xxx.xxx port 22: Operation timed out

    To attempt to resolve the issue:

    I enabled the SSH port.
    I tried usernames other than ubuntu, like ec2-user and root.
    I initially set an inbound ssh rule in the security group to allow only my IP address; when that did not work, I changed it to allow any IP to connect.

    Those actions did not fix the problem. Here are my guesses as to what I am missing in getting the EC2 instance connection to work:

    My /etc/ssh_config file may be preventing the connection from taking place.
    I may have missed an important networking detail when setting up the VPC.
    I do not have a public IP address specified for the instance; I am connecting through the private IP address.

    My questions for the community: Am I going about it the wrong way by connecting through the private IP address? If so, do I need to specify a public IP address for it to connect, or is there some other method?
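    Connecting to the private IP from outside the VPC will indeed time out: a private address is only reachable inside the VPC, so the instance needs a public (or Elastic) IP, plus an internet gateway and a route to it in the subnet's route table. A sketch with the AWS CLI (the instance ID is a placeholder; the console works equally well):

        # Allocate an Elastic IP in the VPC and attach it to the instance
        aws ec2 allocate-address --domain vpc
        aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-12345678

        # Then ssh to the public address instead of the private one
        ssh -i key.pem ubuntu@<elastic-ip>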


  • Remote Desktop access Windows 7 system from Windows 8

    - by Prabhat
    I have two systems, Windows 7 and Windows 8. Both are connected to a WiFi router and have been assigned the addresses 192.168.2.8 and 192.168.2.9 respectively. I have added them to a homegroup. I am able to ping and connect to the Windows 8 system from Windows 7, but I am having trouble connecting to the Windows 7 system from the Windows 8 system; I can't even ping it. The Windows 7 system's user is administrator (the default administrator account from secpol.msc). File sharing, remote access, and network discovery are all enabled. Can someone please help me connect?

    EDIT: I found that this is an issue with Kaspersky Internet Security 2012. If I disable its firewall, it works. I tried opening port 3389 in Kaspersky, but it is still blocking access.


  • How to disable auto insert notification in Windows 7?

    - by White Phoenix
    Alright, here's the problem: the hard drive activity light on my custom-built PC is blinking exactly once every second. Microsoft has this to say on the issue: http://support.microsoft.com/kb/138598. There was discussion of this issue several months ago: Why does my hard drive LED light blink every second? The problem seems to stem primarily from Windows 7 polling the CD-ROM/DVD drive every second to see if something is inserted. The Windows 7 users in the thread linked from that superuser question, https://social.technet.microsoft.com/Forums/fi-FI/w7itprohardware/thread/4f6f63b3-4b58-4154-9298-1566100f9d00, have confirmed that this is a known issue with Windows 7. Some people point at the motherboard circuitry tying the CD-ROM and SATA activity to the same hard drive activity light, but whatever the case, the temporary solution seems to be to disable the CD/DVD-ROM drive in Device Manager. Disabling the CD/DVD-ROM does stop the blinking, but of course that solution is counterproductive, because I shouldn't have to disable a device entirely to fix this problem. I've tried the following suggestions from that thread:

    Change the autorun registry entry to 0.
    Completely disable autoplay in the autoplay control panel.
    Disable autoplay in the Local Group Policy Editor.

    None of these stop the blinking. Apparently these solutions work for both XP and Vista, but it seems to be different in Windows 7. So I'm wondering if anyone has found out how to completely disable the polling in Windows 7, or if this is just an issue we will have to deal with. There's no option to disable auto insert notification on the device within Device Manager (there was in XP), so I have no idea where this option is hidden, or whether there's a registry key entry I could change to stop the polling. Anyone have any idea?
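    For completeness, the "autorun registry entry" the thread refers to is usually this per-driver value. A sketch of the classic XP/Vista approach; as noted above, it may not stop the polling on Windows 7, and a reboot is required:

        Windows Registry Editor Version 5.00

        ; Disable media-change notification polling for the cdrom driver
        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\cdrom]
        "AutoRun"=dword:00000000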


  • Repairing inconsistent pages in database

    - by Raj
    We have a SQL 2000 DB. The server crashed due to a RAID array failure. Now when we run DBCC CHECKDB, we get an error that there are 27 consistency errors in 9 pages. When we run DBCC PAGE on these pages, we get this:

        Msg 8939, Level 16, State 106, Line 1
        Table error: Object ID 1397580017, index ID 2, page (1:8404521).
        Test (m_freeCnt == freeCnt) failed. Values are 2 and 19.
        Msg 8939, Level 16, State 108, Line 1
        Table error: Object ID 1397580017, index ID 2, page (1:8404521).
        Test (emptySlotCnt == 0) failed. Values are 1 and 0.

    Since the indicated index is non-clustered and is created by a unique constraint that includes two columns, we tried dropping and recreating the index. This resulted in the following error:

        CREATE UNIQUE INDEX terminated because a duplicate key was found for index ID 2.
        Most significant primary key is '3280'.
        The statement has been terminated.

    However, running

        SELECT var_id, result_on
        FROM tests
        GROUP BY var_id, result_on
        HAVING COUNT(*) > 1

    returns 0 rows. Here is what we are planning to do:

    Restore a pre-crash copy of the DB and run DBCC CHECKDB.
    If that returns clean, restore again with no recovery.
    Apply all subsequent TLOG backups.
    Stop the production app, take a tail-log backup, and apply that too.
    Drop the prod DB and rename the freshly restored DB to make it prod.
    Start the prod app.

    Could someone please punch holes in this approach, or maybe suggest a different one? What we need is minimum downtime. SQL 2000 DB, size 94 GB; the table that has corrupt pages has 460 million+ rows of data. Thanks for the help. Raj
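    A T-SQL sketch of that restore sequence (all database names and paths are hypothetical; the tail-log backup WITH NORECOVERY leaves the old prod DB in a restoring state, which is what permits the final swap):

        -- Steps 1-3: restore the last known-good full backup, then the log chain
        RESTORE DATABASE ProdNew FROM DISK = 'D:\bak\prod_full.bak' WITH NORECOVERY;
        RESTORE LOG ProdNew FROM DISK = 'D:\bak\prod_log_1.trn' WITH NORECOVERY;
        -- Step 4: after stopping the app, capture and apply the tail of the log
        BACKUP LOG Prod TO DISK = 'D:\bak\prod_tail.trn' WITH NORECOVERY;
        RESTORE LOG ProdNew FROM DISK = 'D:\bak\prod_tail.trn' WITH RECOVERY;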


  • How do you install .net4 on a Server 2008 r2 machine through psremoting in powershell?

    - by Jake
    I need to write a script that installs .NET 4 remotely, using PowerShell, on a group of Server 2008 R2 machines. I based my script on http://social.technet.microsoft.com/Forums/en-US/winserverpowershell/thread/3045eb24-7739-4695-ae94-5aa7052119fd/.

        enter-pssession -computername localhost
        $arglist = "/q /norestart /log C:\Users\tempuser\Desktop\dotnetfx4"
        $filepath = "C:\Users\tempuser\Desktop\dotNetFx40_Full_setup.exe"
        Start-Process -FilePath $filepath -ArgumentList $arglist -Wait -PassThru

    After running the command I get the following log errors (running the same lines locally installs .NET without error):

        Action: Downloading Item Failed to CreateJob : hr= 0x80200014
        Action: Performing actions on all Items
        Action: Performing Action on Exe at C:\Users\tempuser\Desktop\dotnetfx4\SetupUtility.exe
        Exe (C:\Users\tempuser\Desktop\dotnetfx4\SetupUtility.exe) succeeded.
        Exe Log File: dd_SetupUtility.txt
        Action complete
        Action: ServiceControl - Stop clr_optimization_v2.0.50727_32
        ServiceControl operation succeeded!
        Action complete
        Action: ServiceControl - Stop clr_optimization_v2.0.50727_64
        ServiceControl operation succeeded!
        Action complete
        Action: Performing Action on Exe at C:\Users\tempuser\AppData\Local\Temp\Microsoft .NET Framework 4 Setup_4.0.30319\Windows6.1-KB958488-v6001-x64.msu
        Exe (C:\Users\tempuser\AppData\Local\Temp\Microsoft .NET Framework 4 Setup_4.0.30319\Windows6.1-KB958488-v6001-x64.msu) failed with 0x5 - Access is denied.
        PerformOperation on exe returned exit code 5 (translates to HRESULT = 0x5)
        Action complete
        OnFailureBehavior for this item is to Rollback.
        Action: Performing actions on all Items
        Action complete
        Action complete
        Action: Downloading http://go.microsoft.com/fwlink/?LinkId=164184&clcid=0x409 using WinHttp
        WinHttpDetectAutoProxyConfigUrl failed with error: 12180
        Unable to retrieve Proxy information although WinHttpGetIEProxyConfigForCurrentUser called succeeded
        Action complete
        C:\Users\tempuser\AppData\Local\Temp\Microsoft .NET Framework 4 Setup_4.0.30319\TMPF279.tmp.exe: Verifying signature for netfx_Core.mzz
        C:\Users\tempuser\AppData\Local\Temp\Microsoft .NET Framework 4 Setup_4.0.30319\TMPF279.tmp.exe Signature verified successfully for netfx_Core.mzz
        Action complete
        Decompression completed with code: 16389
        Decompression of payload failed: C:\Users\tempuser\AppData\Local\Temp\Microsoft .NET Framework 4 Setup_4.0.30319\netfx_Core.mzz
        Action complete
        Final Result: Installation failed with error code: (0x80074005) (Elapsed time: 0 00:00:28).

    Is there some security setting or perhaps something else I've missed?
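    The 0x5 failure occurs on the embedded .msu step, and Windows Update Standalone Installer packages are known not to run inside a WinRM remoting session. One workaround sketch (server name, task name, and start time are placeholders; whether this matches your environment is an assumption) is to launch the installer through the Task Scheduler so it runs outside WinRM:

        # Register and fire a one-shot task on the remote machine
        schtasks /create /s SERVER01 /tn "DotNet4Install" /sc once /st 23:59 /ru SYSTEM `
            /tr "C:\Users\tempuser\Desktop\dotNetFx40_Full_setup.exe /q /norestart"
        schtasks /run /s SERVER01 /tn "DotNet4Install"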


  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    I submitted this to Stack Overflow (here) but realised it should really be on Server Fault, so apologies for the incorrect and duplicate posting. I've just been on a SQL Server course where we discussed the usage scenarios of multiple filegroups and files over local RAID and local disks, but we didn't touch SAN scenarios, so my question is as follows.

    I currently have a 250 GB database running on SQL Server 2005, where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file. The log file is also on the same volume. My interpretation is that separate data files should be used across different disks to lessen disk contention, and that filegroups should be used for partitioning data. However, with a SAN you obviously don't have the same disk-contention issues that you do with a small RAID setup (or at least we don't at the moment), and Standard Edition doesn't support partitioning.

    So, in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act on each file separately. Which leads to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you
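    For reference, a sketch of what moving hot objects into their own multi-file filegroup looks like (database, file, index, and path names are all hypothetical):

        -- Create a filegroup with two data files on separate LUNs
        ALTER DATABASE Prod ADD FILEGROUP HotTables;
        ALTER DATABASE Prod ADD FILE
            (NAME = HotTables1, FILENAME = 'E:\data\prod_hot1.ndf', SIZE = 10GB),
            (NAME = HotTables2, FILENAME = 'F:\data\prod_hot2.ndf', SIZE = 10GB)
        TO FILEGROUP HotTables;
        -- Rebuilding a clustered index onto the filegroup moves the table there
        CREATE CLUSTERED INDEX IX_Orders ON dbo.Orders (OrderID) ON HotTables;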


  • Port translation in router causing some email to fail

    - by user22037
    We are in the process of setting up a spam filter (SAVASM). One change we are making is to push incoming email on port 25 through our spam filter/server, but have users actually send their email on a different port. I am attempting to make this happen by using port address translation to send port 25 traffic to the SAVASM server IP. As a step in making this change I set up port translation without actually changing the IP addresses: the NAT rules for the email server went from one static NAT rule with no port specified to multiple static NAT rules, each with a port or group matching the access rules for that server (smtp, pop3, http, https, and some other custom ports).

    The problem we are running into is confusing. Some outgoing mail through this server fails when the router has the multiple NAT rules with port translation settings. Email goes through fine FROM our addresses to our internal accounts and to Gmail. However, email fails when sent FROM our client's email address TO our client's email or their personal Comcast account. The only case that worked for them was changing FROM to Comcast, and then messages went through fine to both Comcast and the client's accounts. After switching back to the regular static NAT rule, everything worked for them again. Does anyone have a clue as to what might be going on? We are on a Cisco ASA 5500 box.
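    For comparison, the two styles of rule on a pre-8.3 ASA look roughly like this (a sketch with placeholder addresses; the port list and software version are assumptions):

        ! One-to-one static NAT: all ports follow the mapping
        static (inside,outside) 203.0.113.5 192.168.1.10 netmask 255.255.255.255
        ! Per-port static PAT: SMTP diverted to the filter, HTTPS still to the mail server
        static (inside,outside) tcp 203.0.113.5 smtp 192.168.1.20 smtp netmask 255.255.255.255
        static (inside,outside) tcp 203.0.113.5 https 192.168.1.10 https netmask 255.255.255.255

    Any port not listed in a per-port rule has no inbound translation at all, which is one way mail flows can break for only some destinations.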


  • How Could My Website Be Hacked

    - by Kiewic
    Hi! I wonder how this could happen. Someone deleted the index.php files from all my domains and put their own index.php files in place, with the following message:

        Hacked by Z4i0n - Fatal Error - 2009 [Fatal Error Group Br]
        Site desfigurado por Z4i0n
        Somos: Elemento_pcx - s4r4d0 - Z4i0n - Belive
        Gr33tz: W4n73d - M4v3rick - Observing - MLK - l3nd4 - Soul_Fly
        2009

    My domain has many subdomains, but only the subdomains that can be accessed with a specific user were hacked; the rest weren't affected. I assumed that someone got in through SSH, but I checked the access log using the last command and it didn't show any activity through SSH or FTP on the day of the attack or in the seven days before. Does anybody have an idea? I have already changed my passwords. What do you recommend I do?

    UPDATE: My website is hosted at Dreamhost, and I suppose they have the latest patches installed. While I was looking into how they got into my server, I found weird things. In one of my subdomains there were many scripts for executing commands on the server, uploading files, sending mass emails, and displaying compromising information. These files had been there since last December! I have deleted those files and I'm looking for more malicious files. Maybe the security hole is an old and forgotten PHP application. This application has a file upload form protected by a password system based on sessions, and one of the malicious scripts was in the uploads directory. This doesn't look like an SQL injection attack. Thanks for your help.
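    When hunting for more dropped scripts, two quick sweeps help (a sketch; the web-root path and the 90-day window are assumptions to adjust):

        # PHP files created or modified recently, with a full listing
        find ~/example.com -name '*.php' -mtime -90 -ls
        # Files containing the usual obfuscation primitives of PHP backdoors
        grep -rl --include='*.php' -E 'base64_decode|eval *\(' ~/example.com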


  • Wordpress Directory Permission to allow uploads, plugin folders, etc

    - by user1015958
    I have a pre-made WordPress site which was developed on my local machine, and I uploaded it to a VPS running Debian 6, using nginx, MySQL, and PHP. I followed this guide:

    1) Create an unprivileged user, say 'karl', and make them belong to the www-data group, so that if I log in as karl and create a web root in /home/karl/www/, all the files will be owned by karl:www-data.
    2) Set up nginx to run as the user www-data in nginx.conf.
    3) Set up PHP-FPM to run as www-data.
    4) Place your files in /home/karl/www/[domain name]/public_html/, uploading as 'karl' so you don't have to chown everything again.

    When I type ls -l inside public_html/, it shows that all the files inside are owned by karl:karl, but the public_html directory itself is owned by karl:www-data. I chmod 0755 the wp-content folder, but I still get the error:

        ERROR: Path ../wp-content/connection_images does not seem to be writeable.

    I know I shouldn't set it to 777 for security reasons, so what permissions should I set? And what should I set to allow my users to upload, write posts, and edit articles? Sorry for my English, by the way.
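    Since PHP-FPM runs as www-data, the folders WordPress writes to need group write access for www-data. A sketch along those lines (the domain directory is a placeholder; the setgid bit keeps newly created files in the www-data group):

        cd /home/karl/www/example.com/public_html
        chown -R karl:www-data wp-content
        find wp-content -type d -exec chmod 2775 {} \;
        find wp-content -type f -exec chmod 664 {} \;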


  • "Unable to initialize module" fileinfo php-pecl-Fileinfo.x86_64

    - by Myers Network
    I have a brand new server that I am trying to get set up. This is a 64-bit machine on which I cannot get "fileinfo" or "memcache" to install. I have uninstalled and reinstalled them using yum and pecl with no luck: yum installs fine with no error, but then I get an error when running PHP. From what I can tell, pecl is only installing the 32-bit build and does not put anything in the lib64 directory. Here is my output from php -v:

        PHP Warning:  PHP Startup: fileinfo: Unable to initialize module
        Module compiled with module API=20050922, debug=0, thread-safety=0
        PHP    compiled with module API=20060613, debug=0, thread-safety=0
        These options need to match in Unknown on line 0
        PHP Warning:  PHP Startup: memcache: Unable to initialize module
        Module compiled with module API=20050922, debug=0, thread-safety=0
        PHP    compiled with module API=20060613, debug=0, thread-safety=0
        These options need to match in Unknown on line 0
        PHP 5.2.14 (cli) (built: Aug 12 2010 16:03:48)
        Copyright (c) 1997-2010 The PHP Group
        Zend Engine v2.2.0, Copyright (c) 1998-2010 Zend Technologies

    Here is some other system info in case you need it. uname:

        Linux server.actham.us 2.6.18-194.26.1.el5 #1 SMP Tue Nov 9 12:54:20 EST 2010 x86_64 x86_64 x86_64 GNU/Linux

    php -m (after the same two startup warnings):

        [PHP Modules]
        bz2 calendar ctype curl date dbase dom exif filter ftp gd gettext gmp hash
        iconv imap json ldap libxml mbstring mcrypt mysql mysqli openssl pcntl pcre
        PDO pdo_mysql pdo_sqlite readline Reflection session shmop SimpleXML sockets
        SPL standard tokenizer wddx xml xmlreader xmlrpc xmlwriter xsl zip zlib

        [Zend Modules]

    Any help would be greatly appreciated, thanks....
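    The API=20050922 versus API=20060613 mismatch means the .so files were built against PHP 5.1-era headers rather than the running PHP 5.2, so this is a wrong-build problem, not only a wrong-directory one. A couple of checks worth running (the module path below is an assumption for this system):

        # Where does this PHP actually load extensions from?
        php -i | grep extension_dir
        # Is the module 32-bit or 64-bit ELF?
        file /usr/lib64/php/modules/fileinfo.so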


  • Error when attempting to do a differential or incremental backup of Exchange using ntbackup

    - by voon
    Hi folks, we're running Small Business Server 2003 here. I was reviewing our backup procedures lately and noticed in the ntbackup logs that the differential backups of Exchange were failing with the error:

        (SERVERNAME)\Microsoft Information Store\First Storage Group is not a valid drive, or you do not have access.

    A quick search of Google found this MS KB article: http://support.microsoft.com/kb/555613. However, neither of the suggested fixes applies to our problem. The first is to make sure the backup media is formatted and has adequate space; our backup target is a 1 TB external hard drive with about 600 GB of free space (a full backup of our Exchange DB is currently around 5 GB). The second suggested fix is to "perform a full backup before trying to do incremental", and again that can't be it, because we are doing full backups twice a week. There are no errors in the application log, just entries for ntbackup starting and ending. I've also tested doing a differential and an incremental backup onto the server's internal drive, which unsurprisingly still did not work. I could get around this problem by always doing a full backup of Exchange, but I kind of like the idea of being space-efficient with differential backups. Anyone got any ideas?
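    For what it's worth, the scripted equivalent of that job looks like this (a sketch; SERVERNAME, job name, and target path are placeholders, and the data path syntax is taken from the error message above):

        ntbackup backup "\\SERVERNAME\Microsoft Information Store\First Storage Group" /j "Exchange diff" /f "E:\backups\exchange_diff.bkf" /m differential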


  • Exchange users moved mailbox now can't open some calendars

    - by Kip
    OK, so the environment is Exchange SP on Windows Server 2003. This weekend we had to move a bunch of users off of a corrupt information store onto a temp store, delete the original dodgy store, and then move the users back from the temp store to one of the three other stores under the same original storage group. Since then we have been having some weird access issues relating to calendars. I am assuming it is all related, but it might not be. The problem is that users are unable to see calendars that they previously had access to. The weird thing is that some of the affected users are not ones who were moved, nor are they trying to access calendars belonging to people whose accounts were moved; hence my assumption that it's related, but possibly not. The message received is "Unable to display the folder. The calendar folder could not be found." Here is the kicker: if I move someone who is trying to access other calendars to a different mailbox store (thereby creating a new email account and sending stuff over), things start to work again. This indicates a permissions problem to me, but I am unsure in what way. Looking for help out there, please, guys :) Cheers


  • Windows 7 - How to access my documents from Windows 8 (dual boot)

    - by msbg
    I am dual booting Windows 7 and Windows 8 on two different partitions of the same drive: Win7 on (C:), Win8 on (D:). I am trying to get access to my Win7 user folder (C:\Users\Mason) in order to reach my Win7 documents folder (C:\Users\Mason\Documents) from Windows 8. When I try on Windows 8, I get an error message saying "You don't have permission to access this folder. Click here to permanently get access to this folder." When I click, the progress bar in Windows Explorer slowly moves to the maximum and disappears, and when I then try opening the folder I get the same error message. When editing the security permissions for the folder in Windows 8, Explorer freezes. I do not know how to remove the restrictions from Windows 7. I checked the Windows 8 user folder (D:\Users\Mason) and it had the group or user name "S-1-5-21-936898901-3363470404-1273668825-1001". I tried copying and pasting it into the Win7 user folder's permissions, but got the error "An object with the following name cannot be found". How would I access my folders?
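    When the Explorer security dialog hangs, the command-line route usually still works. A sketch from an elevated Windows 8 prompt (drive letter and account name as described in the question; note this rewrites ownership and ACLs on the whole profile folder):

        takeown /f C:\Users\Mason /r /d y
        icacls C:\Users\Mason /grant Mason:(OI)(CI)F /t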


  • Request bursting from web application Load Tests

    - by MaseBase
    I'm migrating our web and database hosting to a new environment on all-new machines. I recently performed a load test using WAPT to generate load from multiple distributed clients. The server has plenty of room to handle the traffic load, but I'm seeing an odd pattern of incoming traffic during the load tests. Here is the gist of our setup:

    Firewall server running MS Forefront TMG 2010 on a Win 2k8 server
    Request routing done by IIS Application Request Routing on the firewall machine
    Web server is a Hyper-V VM on the database server (which is the host OS)
    These machines are hefty, with dual six-core CPUs (12 total procs)
    Web server running IIS 7.5
    Web applications built in ASP.NET 2.0, with 1 ISAPI filter (URL Rewrite) in front

    What I'm seeing during the load tests is that the requests all come through in bursts. Even though I have 7 different distributed clients sending traffic loads, the requests come through about 300-500 at a time. The performance monitor shows nearly all of the counters moving through this pattern: when a burst of requests comes in, req/sec jumps to 70, queued requests jump to 500, current requests jump up, the CPU jumps up, everything. Then, once it has handled that group of requests, there is a lull of nearly 10 seconds where almost nothing happens: 0-5 req/sec, 0 queued requests, minimal CPU usage. After 10 seconds of inactivity, another burst comes through, spiking all of the counters once again. What I can't figure out is why the requests are coming through in bursts when I know the load being generated is not sent that way, especially considering the various load-generating clients send traffic at different intervals with random think times between requests. Is there something in the layers between Hyper-V or perhaps in the hardware which might cause this coalescing of requests? Here is what I'm looking at; the highlighted metric is Requests/sec, but the other critical counters go with it: Requests Queued (which I'd obviously like to keep as close to 0 as possible). Any ideas on this?

