Search Results

Search found 22139 results on 886 pages for 'security testing'.

  • SSD, AHCI and write performance

    - by Dan
    We've started to deploy SSD drives to our developers' workstations. At the moment we're having the unpleasant surprise that the systems using the new SSDs often freeze, with the HDD activity LED blinking or staying continuously on. Benchmarks show read speeds around 180 MB/s, but write speeds around 5 MB/s. All developers are using Windows 7 Enterprise, 64-bit, SP1. One of our developers suggested (based on his experience) the following sequence:

    1) back up the workstation
    2) use a tool to completely erase the SSD
    3) make sure AHCI is enabled in the BIOS
    4) install Windows
    5) restore from backup

    So far, this procedure seems to work (we're still testing, but write speed now seems to be 120 MB/s). There are some questions in this context:

    - Why do we have to completely reinstall Windows? Is it possible to clean the SSD without reinstalling Windows? Is there a reliable tool?
    - If AHCI was disabled when Windows was installed and we enable it, shouldn't this be enough to correct the write performance issue?
    - If we have to completely erase the SSDs, does this mean the SSDs we've received were used before (i.e. second-hand)? I'm wondering because the package I got was open (I didn't think about it at the time, as I assumed one of my coworkers had simply taken a peek inside).

    Has anyone seen a similar problem before?
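
    One way to "completely erase" an SSD without reinstalling anything first is an ATA Secure Erase, which resets the NAND cells and typically restores factory write performance. A minimal sketch from a Linux live USB, assuming the drive shows up as /dev/sdX and is not in a "frozen" state (device name and password are placeholders):

        # check that the drive supports Secure Erase and is "not frozen"
        hdparm -I /dev/sdX | grep -A8 "Security:"

        # set a temporary ATA password, then issue the erase
        hdparm --user-master u --security-set-pass Pass123 /dev/sdX
        hdparm --user-master u --security-erase Pass123 /dev/sdX

    If the drive reports "frozen", suspending and resuming the machine usually unfreezes it so the commands above can run.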

  • GlusterFS on VMware ESXi 5

    - by Dharmavir
    I want to build a network file system on top of my VMware ESXi-based virtual nodes, which are running Ubuntu 12.04 LTS. I am evaluating options and found that GlusterFS (http://www.gluster.org/) could turn out to be a good choice. Purpose: I have about two dozen VM nodes with different configurations on 2 physical nodes, each of which has the following configuration:

    - 16-core Intel Xeon
    - 1 TB HDD
    - 48 GB RAM

    As I said, each physical server has about 1 TB of disk (and I can add more if I want), so for now I have 2 TB of disk space available. This space is distributed among the two dozen or so VM nodes I have created. Some of them, being application and management servers, have plenty of free disk space, which I want to utilize for some heavy storage that I cannot provide on any single VM node. This way, with my storage distributed between dozens of VM nodes and 2 or more physical nodes, I also get some sort of backup. I do not mind if data gets stored redundantly, but to my knowledge an individual VM node may not be able to store all of the data: a data set of, say, 100 GB will exceed a VM disk size of 70 GB, and the VM also has system and program files on it. I need some suggestions: will GlusterFS be the solution I am looking for, or should I go with something like Hadoop? I am not too sure. But yes, I would like to utilize the free space on each VM node, and if in doing so the data gets stored redundantly, I am okay with that because it gives me data security.
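
    For reference, a replicated Gluster volume built from the free space on two VM nodes might look like the sketch below (hostnames and brick paths are placeholders, and glusterfs-server is assumed to be installed on both nodes):

        # on vm1: add the second node to the trusted pool
        gluster peer probe vm2

        # create a 2-way replicated volume from one brick per node, then start it
        gluster volume create gv0 replica 2 vm1:/data/brick1/gv0 vm2:/data/brick1/gv0
        gluster volume start gv0

        # mount the volume from any client node
        mount -t glusterfs vm1:/gv0 /mnt/storage

    With replica 2, every file is stored on both bricks, which matches the redundancy requirement above at the cost of half the raw capacity.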

  • Controller Error: Do I need to worry?

    - by Kryten
    Hi, I have an HP Pavilion dv5224ea laptop with Windows 7 on it. Recently I discovered an error in Event Viewer:

        The driver detected a controller error on \Device\Ide\IdePort1.

    More details:

        Provider:       atapi
        EventID:        11 (Qualifiers: 49156)
        Level:          2
        Task:           0
        Keywords:       0x80000000000000
        TimeCreated:    2010-03-07T12:43:07.090197600Z
        EventRecordID:  30198
        Channel:        System
        Computer:       Alistair-Win7
        EventData:      \Device\Ide\IdePort1
                        0000100001000000000000000B0004C002000000850100C00000000000000000000000000000000000000000000000000000000004100000

        Binary data:

        In Words
        0000: 00100000 00000001 00000000 C004000B
        0008: 00000002 C0000185 00000000 00000000
        0010: 00000000 00000000 00000000 00000000
        0018: 00000000 00001004

        In Bytes
        0000: 00 00 10 00 01 00 00 00 ........
        0008: 00 00 00 00 0B 00 04 C0 .......À
        0010: 02 00 00 00 85 01 00 C0 ......À
        0018: 00 00 00 00 00 00 00 00 ........
        0020: 00 00 00 00 00 00 00 00 ........
        0028: 00 00 00 00 00 00 00 00 ........
        0030: 00 00 00 00 04 10 00 00 ........

    Event Viewer is recording A LOT of these errors (sometimes 13, one after the other!). Do I need to worry? What does this error mean? What device could "\Device\Ide\IdePort1" be? What is an ATAPI error? Do I need to reinstall Windows? I generally find this occurs when I try to back up my machine (using Windows Backup) or when using a program that uses Volume Shadow Copy. I have run "sfc": no problems. There are no device errors in Device Manager. I have also run "vssadmin list writers": no problems. What's going on? Would it be a good idea to reinstall Windows 7?
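
    Controller errors from the atapi source usually point at the drive, the cable, or the controller rather than at Windows itself, so checking the disk's health is a reasonable first step before considering a reinstall. A quick sketch from an elevated command prompt (smartctl comes from the third-party smartmontools package, and the device name is an assumption):

        rem coarse health flag reported by the disk firmware
        wmic diskdrive get model,status

        rem full SMART attributes - watch reallocated/pending sector counts
        smartctl -a /dev/sda

        rem scan the filesystem and surface bad sectors on the next pass
        chkdsk C: /r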

  • Prolific USB-to-Serial Comm Port significantly slower under Windows 7 compared to Windows XP

    - by Dmitry S
    Not sure if this question should be asked here or on Super User, but if we get an answer here it may be useful for others. I am using a USB-to-serial adapter based on the Prolific chip to talk to a device on a serial port. I have the latest version of the driver installed: 1.3.0 (2010-7-15). When I use my device with this adapter on my main Windows 7 (32-bit) system, it takes 8-9 seconds to send a command through to the device. However, when I do the same thing on a different Windows XP system (an old laptop I borrowed for testing), it only takes 2-3 seconds. I have made sure that the port settings and other variables are the same between systems. I also tested on a third laptop (also running Windows 7) and again got a significant delay. So the question is whether anyone else has experienced the same problem and found a solution. I would like to avoid moving to an XP system for what I need to achieve, so that's my last option. Thanks in advance.

  • ISA 2006 SP1 - SSL Client Certificate Authentication in Workgroup Environment

    - by JoshODBrown
    We have an IIS 6 website that was previously published using an ISA 2006 SP1 standard server publishing rule. In IIS we had required that a client certificate be provided before the website could be accessed... this all worked fine and dandy. Now we wish to use a web publishing rule on ISA 2006 SP1 for this same website. However, it seems the client certificate doesn't get processed now, so of course the user can't access the website. I've read a few articles stating that the CA for the certificate needs to be installed in the Trusted Root Certification Authorities store on the ISA server (I have done this), as well as installing the client certificate on the ISA server (done as well). I have also verified that the ISA server is able to access the CRL for our CA, no problem. In the listener properties for the web publishing rule, under Authentication and Client Authentication Method, there is an option for SSL Client Certificate Authentication. I select this, but it appears the only selectable Authentication Validation Method is Windows (Active Directory), and there is no Active Directory in this environment. When I configure the rule with the defaults and then try to hit my website, it prompts for my certificate; I choose it and hit OK, and then I'm given the following error:

        Error Code: 500 Internal Server Error. The server denied the specified
        Uniform Resource Locator (URL). Contact the server administrator. (12202)

    I checked the event logs on the ISA server, and in the Security log I see Event ID 536, Failure Audit. The reason: "The NetLogon component is not active." I think this is pretty obvious, since there is no Active Directory available. Is there a way to make this web publishing rule work using client certificates in this workgroup environment? Any suggestions or links to helpful documents would be greatly appreciated!

  • I can't connect new Windows 7 PC to Mac iBook with OS 10.3.9

    - by Jeff Humm
    Help! I have an old iBook wired to a router and a new PC linking wirelessly to the same router. On the Mac I have 'seen' the PC but not been able to connect to it. On the PC, the Network and Sharing Centre lists 'IBOOK'. When I click on this, 'Windows Security' asks me to 'Enter Network Password', prompting for a user name and password. I have tried:

    1) The user name and password of my admin account on the iBook. This returns a 'logon failure' message but lists the user name as [NAME_OF_PC\User Name], suggesting it was looking for the user name of the PC, not the Mac.
    2) The user name and password of my account on the PC. This also returns a 'logon failure' message.
    3) The user name of my account on the PC and the 'homegroup password' given to me by Windows when setting up the PC. This also returns a 'logon failure' message.

    Today I've tried connecting the two machines via a patch cable - still no joy. Can anyone help? It is 20 years since I wrestled with any OS other than the Mac's, and 10 years since I've done much wrangling with Macs, so please assume no knowledge! Thanks in advance,

  • WordPress directory permissions to allow uploads, plugin folders, etc.

    - by user1015958
    I have a pre-made WordPress site which was developed on my local machine, and I uploaded it to a VPS running Debian 6 with nginx, MySQL, and PHP, following this guide:

    1) Create an unprivileged user, say 'karl', and make them belong to the www-data group, so that if I log in as karl and create a web root in /home/karl/www/, all the files will be owned by karl:www-data.
    2) Set up nginx to run as the user www-data in nginx.conf.
    3) Set up PHP-FPM to run as www-data.
    4) Place your files in /home/karl/www/[domain name]/public_html/, uploading as 'karl' so you don't have to chown everything again.

    When I type ls -l inside public_html/, it shows that all the files inside are owned by karl:karl, but the public_html directory itself is owned by karl:www-data. I chmod 0755 the folder wp-content, but I still get the error:

        ERROR: Path ../wp-content/connection_images does not seem to be writeable.

    I know I shouldn't set it to 777 for security reasons, so what permissions should I set? And what should I also set to allow my users to upload, write posts, and edit articles? Sorry for my English, by the way.
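
    Since PHP-FPM runs as www-data, the usual fix is to give wp-content to the karl:www-data pair and make it group-writable. A sketch, assuming the web-root layout from the guide above (example.com is a placeholder for the real domain directory):

        # hand wp-content (and everything under it) to karl:www-data
        sudo chown -R karl:www-data /home/karl/www/example.com/public_html/wp-content

        # directories: rwxrwxr-x so PHP (group www-data) can create files inside
        sudo find /home/karl/www/example.com/public_html/wp-content -type d -exec chmod 775 {} \;

        # files: rw-rw-r-- so PHP can modify uploads and plugin files
        sudo find /home/karl/www/example.com/public_html/wp-content -type f -exec chmod 664 {} \;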

  • Samba and Windows 7

    - by John Gaughan
    I built a new computer with the intention of it being primarily a home file server. Here is my setup:

    - one desktop with Windows 7 64 Home Premium
    - one laptop with Windows 7 64 Home Premium
    - one desktop with Kubuntu 11.10 (the server)

    The two desktops use static IPs, and I have hostnames mapped in the HOSTS files on all three systems. I have the same username/password combo on all three systems. I have been trying for a while now to set up Samba so the Windows 7 systems can see and use it. Even when I can get the server to show up, Windows is unable to log in. One of the first things I did was to enable LMv2 authentication, which this version of Samba (3.5.11) supports. The workgroup is set correctly. I can normally see the server but cannot authenticate. Windows HomeGroup is turned off. Pinging between machines works fine, and the two Windows 7 systems work together flawlessly. What I am trying to do is set up Samba for peer-to-peer networking using NTLM security and user-mode authentication. According to the documentation this is possible, but I could find no examples. In all the googling I have done, I see a lot of people asking how to set this up, but either it works for someone else and not for me (no idea what I'm missing), or it doesn't work at all. Has anyone gotten this to work? Is there a place I could download an smb.conf that is set up to work in this environment?
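
    For comparison, a minimal standalone (workgroup) smb.conf for this kind of peer-to-peer setup might look like the sketch below; the share path and user name are placeholders, and each Windows user still has to be added to Samba's own password database with smbpasswd -a:

        [global]
            workgroup = WORKGROUP
            security = user
            # stand-alone server, no domain logons
            map to guest = Bad User

        [shared]
            path = /srv/share
            valid users = john
            read only = no

    Windows 7 sends NTLMv2 by default and Samba 3.5 accepts it out of the box, so a matching entry created via smbpasswd is usually the missing piece rather than the LMv2 setting.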

  • Authenticating Linked Servers - SQL Server 8 to SQL Server 10

    - by jp2code
    We have an old SQL Server 2000 database that has to be kept because it is needed on our manufacturing machines. It also maintains our employee records, since they are needed on these machines for employee logins. We also have a newer SQL Server 10 database (I think this is 2008, but I'm not sure) that we are using for newer development. I recently learned (i.e. today) that I can link the two servers, which would allow me to access the employee tables from the newer server. Following the SF post "SQL Server to SQL Server Linked Server Setup", I tried adding the link. On our SQL Server 2000 machine I got one error, and on our SQL Server 10 machine I got another (screenshots not reproduced here). The messages, though worded differently, probably say the same thing: I need to authenticate, somehow. We have an Active Directory, but it is on yet another server. What, exactly, should be done here? One answer suggested checking the Security settings, but did not say what else to do. Both servers are set to SQL Server and Windows Authentication mode. Now what?
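
    The authentication piece is usually handled by mapping a login in the linked server definition rather than relying on Windows pass-through. A hedged T-SQL sketch run on the newer server, assuming a SQL login named linkuser exists on the SQL 2000 box (server, database, and credential names are placeholders):

        -- register the old server as a linked server
        EXEC sp_addlinkedserver
            @server = N'SQL2000SRV',
            @srvproduct = N'SQL Server';

        -- map all local logins to one SQL login that exists on the old server
        EXEC sp_addlinkedsrvlogin
            @rmtsrvname = N'SQL2000SRV',
            @useself = 'FALSE',
            @locallogin = NULL,
            @rmtuser = N'linkuser',
            @rmtpassword = N'secret';

        -- quick test against a hypothetical employee table
        SELECT TOP 5 * FROM SQL2000SRV.EmployeeDB.dbo.Employees;

    With @srvproduct set to 'SQL Server', @server must be the actual network name of the SQL 2000 machine.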

  • Tuning Windows 7 for use in a VM

    - by intuited
    I'm running Windows 7 in a VirtualBox virtual machine and would like to make it run in a more streamlined fashion. I'll be using the install primarily for testing web apps and have no need for it to run quickly. I would like it to run with minimal memory requirements and with minimal changes to its virtual hard drive's contents. Changes to the hard drive contents, for example to the paging file, result in larger snapshot sizes. Another recent post of mine seems to be related to this issue, but does not directly address the Windows side. One concern I have is that Windows seems to be using 17% of its paging file even with over 900 MB of memory marked "Standby" or "Free". My uneducated guess is that this is being used to store indexes or some other data that helps to speed up the system but is not strictly necessary. I'm also wondering if it's normal for Windows to use over 500 MB of "In Use" memory with no apps running. Will this amount decrease if I reduce the amount of "installed" memory in the VM? What steps can I take to reduce the system's memory footprint without incurring an increase in paging file usage?
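
    The "installed" memory is set from the host side with VBoxManage; a small sketch, assuming the VM is named "Win7-test" (it must be powered off first):

        # shrink guest RAM to 1 GB and trim video memory
        VBoxManage modifyvm "Win7-test" --memory 1024 --vram 16

        # confirm the new values
        VBoxManage showvminfo "Win7-test" | grep -i memory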

  • Access Denied / Server 2008 / Home Directories

    - by Shaun Murphy
    Domain controller: BDC01 (192.168.9.2). Storage server: BrightonSAN1 (192.168.9.3). Domain: brighton.local. Last night I moved our users' home directories off of our domain controller onto a storage server using the MS File Server Migration Toolkit (FSMT). I'm getting a mixed bag of errors. The first is that some users cannot log on properly: they can't access logon.vbs in the sysvol folder on the DC and consequently cannot map their drives. I've narrowed that down to a DNS issue, as there was a remnant of our previous DNS server in the DHCP server options and scope options. I'm able to get their drives remapped by browsing to the sysvol folder by IP address (as opposed to computer name) and manually running the logon.vbs script. The other error I'm getting is Access Denied on a few of the users' home directories. The top-level folder (Home) is shared as normal, and I've removed and re-added the NTFS security a number of times now, including making the user the owner with full control. I've checked each and every file and folder in the affected users' home directories, and they are indeed the owner, yet they are unable to write; they can only read the contents. I'm stumped. This isn't happening to all clients. I'm considering removing their AD accounts, backing up their folders, and re-adding them as a last resort, but obviously I'd like to know why the above errors are happening.
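
    When migrated folders end up with broken DACLs, resetting ownership and re-applying inherited permissions from the share root often clears this kind of read-only behaviour. A sketch, assuming the home share lives at D:\Home and jsmith is an affected user (both are placeholders):

        rem take ownership recursively as an administrator
        takeown /f D:\Home\jsmith /r /d y

        rem throw away the migrated ACLs and re-inherit from D:\Home
        icacls D:\Home\jsmith /reset /t /c

        rem grant the user full control over their own tree
        icacls D:\Home\jsmith /grant brighton\jsmith:(OI)(CI)F /t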

  • Hardening non-root standalone Linux Tomcat install

    - by NoozNooz42
    I want to know if you have any tips on how to strengthen the security of a non-root install of Tomcat in standalone mode, once Tomcat is already installed in a non-root account in standalone mode. I mention this because, for example, I'm not at all interested in the answers given here (because both Java and Tomcat require root privileges there to be installed, and I have zero interest in running jsvc): http://serverfault.com/questions/43765. So far, here's what I've done for my non-root standalone Tomcat 6 install:

    - download and install the JRE .bin provided by Oracle/Sun (no need to be root here); there's no need for a full JDK anymore, right, given that Jasper (Tomcat's JSP engine) now has its own compiler?
    - download and tar -xzf Tomcat 6 (no need to be root here)
    - set up transparent port forwarding (must be root here; see the sketch after this post)

    Note that my distribution is a Debian one and I have exactly zero interest in downloading Debian packages / backports / whatever, because, once again, I do NOT want to need to be root to install Java and Tomcat. The only moment I needed to be root was to configure the firewall to transparently forward ports 80 -> 8080 and 443 -> 8443. I then deleted all the default webapps but one:

        cd ~/apache-tomcat-6.0.26/webapps
        rm -rf docs
        rm -rf examples/
        rm -rf manager/
        rm -rf ROOT/

    What about the directory ~/apache-tomcat-6.0.26/webapps/host-manager: do I need it, or can I delete it? So, once I've installed Tomcat standalone in a non-root account (taking into account that I don't want to enter the root password anymore and that I don't plan to install the whole Apache shebang), what more can I do? Are there connectors I can disable? (How?)
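
    For reference, the transparent port forwarding mentioned above is typically just two NAT rules; a sketch of the one-time root step, assuming iptables and traffic arriving on the machine's external interface:

        # redirect privileged ports to Tomcat's unprivileged ones
        iptables -t nat -A PREROUTING -p tcp --dport 80  -j REDIRECT --to-port 8080
        iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 8443

    After this, the remaining hardening (for instance, commenting out the AJP connector on port 8009 in conf/server.xml if nothing uses it) can be done entirely from the unprivileged account.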

  • What breaks in a Windows domain if a member has a high time skew?

    - by Ryan Ries
    It's taken for granted by most IT people that in a Windows domain, if a member server's clock is off by more than 5 minutes (or however many minutes you've configured) from that of its domain controller, logons and authentications will fail. But that is not necessarily true - at least not for all authentication processes on all versions of Windows. For instance, I can set the time on my Windows 7 client to be skewed all to heck, and logoff/logon still works fine. What happens is that my client sends an AS_REQ (with its time stamp) to the domain controller, and the DC responds with KRB_AP_ERR_SKEW. But the magic is that when the DC responds with the aforementioned Kerberos error, it also includes its own time stamp, which the client in turn uses to adjust its time before resubmitting the AS_REQ, which is then approved. This behavior is not considered a security threat because encryption and secrets are still being used in the communication. This is also not just a Microsoft thing; RFC 4430 describes this behavior. So my question is: does anyone know when this changed? And why is it that other things still fail? For instance, Office Communicator kicks me off if my clock drifts too far out. I would really like more detail on this.

    Edit: here's the bit from RFC 4430 that I'm talking about:

        If the server clock and the client clock are off by more than the
        policy-determined clock skew limit (usually 5 minutes), the server
        MUST return a KRB_AP_ERR_SKEW. The optional client's time in the
        KRB-ERROR SHOULD be filled out. If the server protects the error by
        adding the Cksum field and returning the correct client's time, the
        client SHOULD compute the difference (in seconds) between the two
        clocks based upon the client and server time contained in the
        KRB-ERROR message. The client SHOULD store this clock difference and
        use it to adjust its clock in subsequent messages. If the error is
        not protected, the client MUST NOT use the difference to adjust
        subsequent messages, because doing so would allow an attacker to
        construct authenticators that can be used to mount replay attacks.
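
    When chasing down which component is skew-sensitive, it helps to measure the actual offset between a member and its DC. A small sketch using the built-in w32tm tool (the DC name is a placeholder):

        rem show the live offset between this machine and the DC
        w32tm /stripchart /computer:dc01.contoso.com /samples:5 /dataonly

        rem force an immediate resync against the domain time hierarchy
        w32tm /resync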

  • Matlab computations done over Apple Filing Protocol (AFP) depend on POSIX permissions, ignore ACLs

    - by flumignan
    I'm a system administrator and have never used Matlab, so forgive my general ignorance of the program. My users have encountered problems when executing scripted Matlab actions over AFP against a Mac OS X Server 10.6.7: where the access control list (ACL) should allow an action, the POSIX-style permissions disallow the activity. It seems as if Matlab, run locally on the Mac workstations against datasets on the remote server, ignores the ACLs entirely. This is the only application I've ever seen behave this way. The server's filesystem is HFS+J, and all other activity is performing as expected. These users cannot use CIFS because of our integration with external directory systems. In this example, for the directory bxdata, the members of the group cibturner should be able to modify the files. Indeed they can, using any method except Matlab scripts. When a Matlab script hits these files, the POSIX permissions of 644 disallow modification; it's as if the ACLs are irrelevant.

        [root@cib 16:00:24 /14181.2_5sM]# ls -leh@ bxdata/
        total 128
        -rw-r--r--+ 1 kel32  staff  18K Feb 15 09:31 TS-5sMath030708-21073-1.edat
         0: group:cibturner inherited allow read,write,execute,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown
         1: group:cibsrlocaladmins inherited allow read,write,execute,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown
         2: group:crcservergroup inherited allow read,write,execute,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown
        -rw-r--r--+ 1 kel32  staff  25K Feb 15 09:31 TS-5sMath030708-21073-1.txt
         0: group:cibturner inherited allow read,write,execute,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown
         1: group:cibsrlocaladmins inherited allow read,write,execute,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown
         2: group:crcservergroup inherited allow read,write,execute,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown

    Because this server has HIPAA data, security is critical. We are not using networked home directories or SAN technology. The Matlab program is run on the user's hard drive; access is granted via Kerberized AFP.
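
    If Matlab really is honoring only the POSIX bits, one workaround (not a fix for the ACL-ignoring behaviour itself) is to make the POSIX group layer agree with what the ACL already grants. A sketch, assuming cibturner is the group that most needs write access and can become the group owner:

        # give the directory tree to the group the ACL was written for
        chgrp -R cibturner bxdata/

        # add group write to the POSIX bits (644 -> 664 for files)
        chmod -R g+w bxdata/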

  • Linux Experts Riddle: Network output of 10 MB/s on 10 Gb/s NIC

    - by user150324
    I have two CentOS 6 servers and am trying to transfer files between them. The source server has a 10 Gb/s NIC and the destination server has a 1 Gb/s NIC. Regardless of the command or protocol used, the transfer speed is ~1 megabyte per second. The goal is at least a couple dozen MB per second. I have tried rsync (also with various encryption options), scp, wget, aftp, and nc. Here are some testing results with iperf:

        [root@serv ~]# iperf -c XXX.XXX.XXX.XXX -i 1
        ------------------------------------------------------------
        Client connecting to XXX.XXX.XXX.XXX, TCP port 5001
        TCP window size: 64.0 KByte (default)
        ------------------------------------------------------------
        [  3] local XXX.XXX.XXX.XXX port 33180 connected with XXX.XXX.XXX.XXX port 5001
        [ ID] Interval       Transfer     Bandwidth
        [  3]  0.0- 1.0 sec  1.30 MBytes  10.9 Mbits/sec
        [  3]  1.0- 2.0 sec  1.28 MBytes  10.7 Mbits/sec
        [  3]  2.0- 3.0 sec  1.34 MBytes  11.3 Mbits/sec
        [  3]  3.0- 4.0 sec  1.53 MBytes  12.8 Mbits/sec
        [  3]  4.0- 5.0 sec  1.65 MBytes  13.8 Mbits/sec
        [  3]  5.0- 6.0 sec  1.79 MBytes  15.0 Mbits/sec
        [  3]  6.0- 7.0 sec  1.95 MBytes  16.3 Mbits/sec
        [  3]  7.0- 8.0 sec  1.98 MBytes  16.6 Mbits/sec
        [  3]  8.0- 9.0 sec  1.91 MBytes  16.0 Mbits/sec
        [  3]  9.0-10.0 sec  2.05 MBytes  17.2 Mbits/sec
        [  3]  0.0-10.0 sec  16.8 MBytes  14.0 Mbits/sec

    I guess the hard drive is not the bottleneck here.
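
    14 Mbit/s on gigabit-class links looks more like a negotiation or link-quality problem than disk I/O, so a few hedged first checks on each box (the interface name is a placeholder):

        # verify negotiated speed and duplex on the interface
        ethtool eth0 | grep -Ei 'speed|duplex|auto'

        # look for error and drop counters that force TCP retransmissions
        ethtool -S eth0 | grep -Ei 'err|drop|crc'
        ip -s link show eth0

    A link stuck at half duplex or a bad cable/SFP typically produces exactly this pattern: the link reports up at full speed, but effective throughput collapses to a few Mbit/s.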

  • Kerberos service on Win2k DC will not start following disk failure

    - by iwilson68
    Hi, I have a Windows 2000 (mixed-mode) domain with 4 DCs. One of these also acts as an Exchange 2000 server, which uses 2 logical volumes from an MSA 2000 array; AD etc. is stored on local drives. We experienced a problem last week when the RAID array fell back to a redundant controller, which temporarily meant that the two logical drives were not visible to the server for around 5 minutes and a couple of reboots. The log records these events as:

        Event Type:     Warning
        Event Source:   Disk
        Event Category: None
        Event ID:       51
        Date:           06/11/2009
        Time:           11:46:23
        User:           N/A
        Computer:       server1
        Description:    An error was detected on device \Device\Harddisk1\DR1
                        during a paging operation.

    Following these problems, the "Kerberos Key Distribution Center" service refuses to start with "error 31: a device attached to the system is not functioning". All other automatic-start services (including Net Logon) are running, and there are no DNS issues etc. All devices are also functioning, but the two logical MSA disks are now numbered 2 and 4 in the Windows Disk Management MMC. I suspect they may previously have been identified as disks 1 and 2, and perhaps Windows still sees this as an ongoing failure? Replication has not been affected, but obviously there are many audit failures in the Security log relating to users and workstations, presumably linked to the Kerberos issue. Attempting to manually start the Kerberos service generates the following in the System log:

        Event Type:     Error
        Event Source:   Service Control Manager
        Event Category: None
        Event ID:       7023
        Date:           09/11/2009
        Time:           09:46:55
        User:           N/A
        Computer:       Server1
        Description:    The Kerberos Key Distribution Center service terminated
                        with the following error: A device attached to the
                        system is not functioning.

    DCDIAG passes all tests except "Advertising" and "Services", which I believe relate directly to the Kerberos failure alone. Any advice would be appreciated.

  • Nginx - Enable PHP for all hosts

    - by F21
    I am currently testing nginx and have set up some virtual hosts by putting the configuration for each virtual host in its own file in a folder called sites-enabled. I then ask nginx to load all those config files using:

        include C:/nginx/sites-enabled/*.conf;

    This is my current config:

        http {
            server_names_hash_bucket_size 64;
            include mime.types;
            include C:/nginx/sites-enabled/*.conf;
            default_type application/octet-stream;
            sendfile on;
            keepalive_timeout 65;

            server {
                listen 80;
                root C:/www-root;
                #charset koi8-r;
                #access_log logs/host.access.log main;

                location / {
                    index index.html index.htm index.php;
                }

                # redirect server error pages to the static page /50x.html
                error_page 500 502 503 504 /50x.html;
                location = /50x.html {
                    root html;
                }

                location ~ \.php$ {
                    fastcgi_pass 127.0.0.1:9000;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include fastcgi_params;
                }
            }

            server {
                server_name localhost;
            }
        }

    And this is one of the configs for a virtual host:

        server {
            server_name testsubdomain.testdomain.com;
            root C:/www-root/testsubdomain.testdomain.com;
        }

    The problem is that for testsubdomain.testdomain.com, I cannot get PHP scripts to run unless I define a location block with fastcgi parameters for it. What I would like to do is enable PHP for all sites hosted on this server, without having to add a PHP location block with fastcgi parameters to each one, for maintainability. That way, if I need to change any fastcgi values for PHP, I can change them in one place. Is this something that's possible with nginx? If so, how can it be done?
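
    nginx has no global location inheritance, but the usual workaround is to factor the PHP block into one file and include it from every server block. A sketch, assuming the shared file is saved as C:/nginx/php.conf (the path is an assumption):

        # C:/nginx/php.conf - shared fastcgi settings, maintained in one place
        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

    Each virtual host then shrinks to:

        server {
            server_name testsubdomain.testdomain.com;
            root C:/www-root/testsubdomain.testdomain.com;
            include C:/nginx/php.conf;
        }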

  • Why does "commit" appear in the mysql slow query log?

    - by Tom
    In our MySQL slow query logs I often see lines that just say "COMMIT". What causes a commit to take time? Another way to ask this question is: "How can I reproduce getting a slow 'commit;' statement with some test queries?" From my investigation so far, I have found that if there is a slow query within a transaction, then it is the slow query that gets written to the slow log, not the commit itself.

    Testing in the mysql command-line client:

        mysql> begin;
        Query OK, 0 rows affected (0.00 sec)

        mysql> UPDATE members SET myfield=benchmark(9999999, md5('This is to slow down the update')) WHERE id = 21560;
        Query OK, 0 rows affected (2.32 sec)
        Rows matched: 1  Changed: 0  Warnings: 0

    At this point (before the commit) the UPDATE is already in the slow log.

        mysql> commit;
        Query OK, 0 rows affected (0.01 sec)

    The commit happens fast; it never appeared in the slow log. I also tried an UPDATE which changes a large amount of data, but again it was the UPDATE that was slow, not the COMMIT. However, I can reproduce a slow ROLLBACK that takes 46 s and gets written to the slow log:

        mysql> begin;
        Query OK, 0 rows affected (0.00 sec)

        mysql> UPDATE members SET myfield=CONCAT(myfield,'TEST');
        Query OK, 481446 rows affected (53.31 sec)
        Rows matched: 481446  Changed: 481446  Warnings: 0

        mysql> rollback;
        Query OK, 0 rows affected (46.09 sec)

    I understand why rollback has a lot of work to do and therefore takes some time. But I'm still struggling to understand the COMMIT situation - i.e. why it might take a while.
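
    For what it's worth, an InnoDB COMMIT does almost no data work (the rows were already changed during the transaction); its cost is the disk flush of the redo log, and of the binary log if enabled. So slow COMMITs in the log usually point at fsync contention on the disk at that moment rather than at the query itself. A hedged sketch of the settings under which every commit pays the full flush price, making this visible under I/O load (the table is the one from the session above):

        -- make each COMMIT wait for a real disk flush of the redo log
        SET GLOBAL innodb_flush_log_at_trx_commit = 1;
        -- with binary logging on, sync the binlog on every commit as well
        SET GLOBAL sync_binlog = 1;

        BEGIN;
        UPDATE members SET myfield = 'x' WHERE id = 21560;
        -- this commit is now one or two fsyncs; on a saturated disk it can
        -- exceed long_query_time and land in the slow log on its own
        COMMIT;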

  • Windows Task Scheduler fails on EventData instruction

    - by Pete
    The scheduled task fails on the EventData instruction in this XML:

        <ValueQueries>
            <Value name="eventChannel">Event/System/Channel</Value>
            <Value name="eventRecordID">Event/System/EventRecordID</Value>
            <Value name="eventData">Event/EventData/Data</Value>
        </ValueQueries>

    The other two fields can be passed as arguments, and the EventData syntax matches other websites, so I don't know why it's failing. This is the Event Viewer XML:

        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
            <System>
                <Provider Name="Aptify.ExceptionManagerPublishedException" />
                <EventID Qualifiers="0">0</EventID>
                <Level>2</Level>
                <Task>0</Task>
                <Keywords>0x80000000000000</Keywords>
                <TimeCreated SystemTime="2013-11-07T19:39:14.000000000Z" />
                <EventRecordID>97555</EventRecordID>
                <Channel>Application</Channel>
                <Computer>[Computer Name]</Computer>
                <Security />
            </System>
            <EventData>
                <Data>General Information
        *********************************************
        Additional Info:
        ExceptionManager.MachineName: [Computer Name]
        ExceptionManager.TimeStamp: 11/7/2013 12:39:14 PM
        ExceptionManager.FullName: AptifyExceptionManagement, Version=4.0.0.0, Culture=neutral, PublicKeyToken=[key]
        ExceptionManager.AppDomainName: Aptify Shell.exe
        ExceptionManager.ThreadIdentity:
        ExceptionManager.WindowsIdentity: ACA_DOMAIN\pbassett

        1) Exception Information
        *********************************************
        Exception Type: Aptify.Framework.BusinessLogic.GenericEntity.AptifyGenericEntityValidationException
        Entity: Tasks
        ErrorString: Task Type "Make Contact" is not active.
        MachineName: [machine]
        CreatedDateTime: 11/7/2013 12:39:14 PM
        AppDomainName: Aptify Shell.exe
        ThreadIdentityName:
        WindowsIdentityName: [identity]
        Severity: 0
        ErrorNumber: 0
        Message: Task Type "Make Contact" is not active.
        Data: System.Collections.ListDictionaryInternal
        TargetSite: Boolean Save(Boolean, System.String ByRef, System.String)
        HelpLink: NULL
        Source: AptifyGenericEntity

        StackTrace Information
        *********************************************
        at Aptify.Framework.BusinessLogic.GenericEntity.AptifyGenericEntity.Save(Boolean AllowGUI, String&amp; ErrorString, String TransactionID)</Data>
            </EventData>
        </Event>
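
    One way to check whether the XPath itself is the problem is to run the same query against the log with the built-in wevtutil tool and inspect what comes back; a hedged sketch from an elevated prompt:

        rem pull the newest matching event as rendered XML and confirm
        rem that EventData/Data sits where the value query expects it
        wevtutil qe Application /c:1 /rd:true /f:xml ^
            /q:"*[System[Provider[@Name='Aptify.ExceptionManagerPublishedException']]]"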

  • Join Active Directory (Win 2k8 R2) to Open Directory (Snow Leopard)

    - by Tom O'Connor
    The vast majority of questions regarding the interoperability of Active Directory and Open Directory involve getting Mac clients to see an AD and authenticate against it. What we'd like to do is get a Windows 7 workstation to authenticate completely against Open Directory. We tried setting it up as an NT4-type PDC, and that doesn't work satisfactorily. We tried using pGina with the LDAP backend, which allows authentication but has no support for authorization; as a result, if we mount an NFS share, the user has the rights to do anything they damn well please. Not ideal for security (totally bloody unacceptable, actually). We tried using a Samba server (a newer version than the one on the Open Directory server) as an intermediate, so that it knows about the LDAP server on the OD server but uses Samba 4 instead of v3. That didn't work either: we could log in but couldn't mount, and when we did, we had the same rights as with pGina. If we right-click the mounted drive in Windows and look at the NFS UID, it returns -2, not the correct (mapped) UID. So the final plan I've got is to use an Active Directory inside a Windows 2008 R2 virtual machine. What I want to achieve is to have the Active Directory sync its user data from Open Directory (read-only would be fine). That way, we'd have the ability to connect Windows 7 clients to a "virtual domain" which would actually just grab information from OD's LDAP. All the information I've found is about going the other way. Does anyone know how we can do this?
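
    A rough one-way sync can be scripted by exporting the OD accounts over LDAP and importing them into AD as LDIF; attribute names and DNs differ between the two schemas, so the export has to be remapped before import. A heavily hedged sketch (server names, base DNs, and the mapping step are all assumptions):

        # dump user entries from Open Directory
        ldapsearch -x -H ldap://od.example.com \
            -b "cn=users,dc=od,dc=example,dc=com" \
            "(objectClass=inetOrgPerson)" uid cn sn givenName mail > od-users.ldif

        # ...remap OD attributes/DNs to AD equivalents (custom script required)...

        # import the remapped entries into AD on the 2008 R2 box
        ldifde -i -f ad-users.ldif -s ad01.example.com

    Passwords do not survive such an export, so the imported accounts would still need a separate password provisioning step.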

  • Rsync: General file/folder synchronization

    - by Rey Leonard Amorato
    I have a file server which is in charge of pulling a folder tree from multiple workstations on a daily basis. My current method for this is rsync, which works pretty well provided directory names and/or files remain the same. However, when files are renamed or moved about within subdir1, rsync copies them over to the server again, creating duplicates. I then have to manually find and delete the extraneous files/folders that were left on the server during previous syncs. Note that I cannot use rsync's --delete flag, because a sync from any one workstation would then mirror that particular folder tree instead of merging them all onto the server.

    Visual diagram:

        Server:          Workstation1:    Workstation2:    Workstation(n):
        Folder*          Folder*          Folder*          Folder*
         -subdir1         -subdir1         -subdir1         -subdir(n)
          -file1           -file1           -file2           -file(n)
          -file2
          -file(n)

    Is there a simple script (preferably in bash, nothing fancy) that can delete the extraneous files/folders in the event a file is renamed or moved to a different subdir? Is there a different program, much like rsync, that can accomplish this task autonomously and in a much simpler manner? I have looked at unison, but I did not like the fact that it keeps a local database of the syncing info. Any tips at all as to how I am supposed to tackle this? Thank you in advance for your help.

    EDIT: I tried unison recently, and I can safely say it is out of the question now. unison is a bi-directional synchronization tool, and from my testing it mirrors the files existing on the server back to all workstations - this is unwanted. Preferably, I would want files/folders to stay within their respective workstations and just merge onto the server, i.e. uni-directional sync, but with renames/moves propagated to the server. I might have to look into Git/Mercurial/Bazaar as mentioned by Kyle, but I am still unsure if they are fit for the job.
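
    One pattern that makes --delete safe again is to give every workstation its own target directory on the server: each sync then mirrors only that machine's tree, so renames and moves are propagated as deletions without touching anyone else's files. A bash sketch, with host and path names as placeholders:

        #!/bin/bash
        # pull each workstation's tree into its own server-side subdirectory;
        # --delete is now safe because no two machines share a target
        for host in workstation1 workstation2 workstation3; do
            rsync -av --delete "$host:/path/to/Folder/" "/srv/backup/$host/Folder/"
        done

    The trade-off is that the merged view is gone; if a single combined tree is needed, it can be rebuilt read-only on the server (e.g. with symlinks) without affecting the per-host mirrors.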

  • 500 error when deploying Rails application via apache2 + Passenger

    - by user1633983
    I finally completed my own app, so the only work left is deployment. I'm using Ubuntu 10.04 and apache2 (installed via apt-get), so I'm trying to deploy through Passenger. I installed the passenger gem like this:

        sudo gem install passenger
        rvmsudo passenger-install-apache2-module

    and I configured the Apache settings as the installation message says. I added the lines below in the middle of the /etc/apache2/apache2.conf file:

        LoadModule passenger_module /home/admin/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.17/ext/apache2/mod_passenger.so
        PassengerRoot /home/admin/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.17
        PassengerRuby /home/admin/.rvm/wrappers/ruby-1.9.3-p194/ruby

    and I appended the lines below to the /etc/apache2/sites-available/default file:

        <VirtualHost *:80>
            ServerName localhost
            # !!! Be sure to point DocumentRoot to 'public'!
            DocumentRoot /home/admin/homepage/public
            <Directory /home/admin/homepage/public>
                # This relaxes Apache security settings.
                AllowOverride all
                # MultiViews must be turned off.
                Options -MultiViews
            </Directory>
        </VirtualHost>

    But when I restart the Apache service and hit the address, a 500 error occurs. At first it was the same 500 error but the error page was Apache's; after I reinstalled libapache2-mod-passenger, the 500 error page changed to the one from Rails (the one located at public/500.html). Because the Rails 500 page is served, I think the Passenger module is properly connected with Apache. What should I do to fix this problem? Do I need to configure something inside my app before deployment?
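
    Since the Rails 500 page is being served, the app itself is crashing during startup or while handling the request, and the reason usually shows up in the logs. A quick sketch of where to look (paths assume the layout above):

        # Passenger prints application start-up exceptions here
        tail -n 50 /var/log/apache2/error.log

        # Rails runtime errors land in the app's own log
        tail -n 50 /home/admin/homepage/log/production.log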

  • nginx rewrite or internal redirection cycle

    - by gyre
    I'm banging my head against a table trying to figure out what is causing the redirection cycle in my nginx configuration when trying to access a URL which does not exist. The configuration goes as follows:

        server {
            listen 127.0.0.1:8080;
            server_name .somedomain.com;
            root /var/www/somedomain.com;

            access_log /var/log/nginx/somedomain.com-access.nginx.log;
            error_log /var/log/nginx/somedomain.com-error.nginx.log debug;

            location ~* \.php.$ {
                # Proxy all requests with an URI ending with .php*
                # (includes PHP, PHP3, PHP4, PHP5...)
                include /etc/nginx/fastcgi.conf;
            }

            # all other files
            location / {
                root /var/www/somedomain.com;
                try_files $uri $uri/ ;
            }

            error_page 404 /errors/404.html;
            location /errors/ {
                alias /var/www/errors/;
            }

            # this loads custom logging configuration which disables favicon error logging
            include /etc/nginx/drop.conf;
        }

    This domain is a simple static HTML site, just for testing purposes. I'd expect the error_page directive to kick in when a file cannot be found, as I have fastcgi_intercept_errors on; in the http block and error_page set up, but I'm guessing the request fails even before that, somewhere in the internal redirects. Any help would be much appreciated.
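
    One common source of exactly this cycle is the try_files line: its last parameter is treated as an internal redirect target, so for a nonexistent path the request is re-dispatched to $uri/ and re-enters location matching instead of producing a 404. A hedged tweak that makes the 404 explicit so error_page can fire:

        location / {
            root /var/www/somedomain.com;
            # return 404 directly when neither the file nor the directory
            # exists, letting error_page 404 serve /errors/404.html
            try_files $uri $uri/ =404;
        }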

  • Disabling Skype automatic update

    - by user13267
    How do I stop Skype from searching for, or at least downloading, updates without my consent? I want to keep that annoying "Update Skype now" dialog box, which pops up both before and after I log in to Skype, from appearing at all. A few months ago this used to work:

    1) Go to the C:\Users\"YourName"\AppData\Local\Temp folder.
    2) Find the file called SkypeSetup.exe and delete it.
    3) Create a text file in the folder and rename it to SkypeSetup.exe.
    4) Right-click the new file you just created and open its properties.
    5) Click the Security tab, then click the Advanced button.
    6) Click "Change Permissions", then "Add". Enter "Everyone" (without the quotes) where it says "Enter the object name to select (examples):" and click "OK".
    7) Check the "Deny" box for "Full control" and click "OK".

    (Obtained from HERE.) But now this seems to have stopped working. The worst part is that Skype seems to download ~30 MB of executable setup file without my knowledge before bugging me with the dialog box to update, and there seems to be no direct way to disable this download. Disabling the Skype Updater service does not seem to work either. Is there any kind of patch or registry hack I can use to stop Skype from auto-updating? Or should I start looking for an alternative to Skype altogether?
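
    Older desktop Skype builds honored a machine-wide policy value that suppresses the version check; whether a given build still respects it is not guaranteed, so treat this .reg sketch as an assumption to test:

        Windows Registry Editor Version 5.00

        ; policy key read by the classic Skype desktop client;
        ; 1 = skip the automatic version check / update prompt
        [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Skype\Phone]
        "DisableVersionCheck"=dword:00000001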

  • Setting up CIFS ISO Repository for Xen

    - by user85610
    I recently started working with Xen, to try to make better use of an extra desktop box for development testing. I'd like to be able to do OS installs on it without having to burn discs, but I'm having trouble actually getting it to boot OS ISOs from a Windows share. My Windows box runs Windows 7 and is on a domain. I created a CIFS ISO SR in Xen, specifying the correct username and password. Xen is able to scan the share, and I see the ISOs that are in the folder and can select them in the list in XenCenter. However, when I try to start the VM, I get:

        Error: Starting VM 'linxcentos' - INVALID_SOURCE - Unable to access a required
        file in the specified repository: file:///tmp/cdrom-repo-hIz-H7/isolinux/vmlinuz.

    I tried booting a different Linux ISO and got the same result. I know the ISOs are valid because I was able to install from them without issue when I tried VMware ESXi earlier. What am I missing here? It's Xen/XenCenter 6, and I'm trying to install the newest version of CentOS. I may end up burning a disc for now, but I'd like to get this to work, if only for the principle of not letting mysterious behaviors go unsolved...
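
    If the XenCenter wizard keeps producing a broken SR, recreating it from the CLI sometimes surfaces the real error; a hedged sketch using the xe tool on the XenServer host (share path and credentials are placeholders):

        # create a CIFS ISO library; XenServer mounts it read-only
        # and lists every .iso it finds
        xe sr-create name-label="Windows ISO share" type=iso content-type=iso \
            device-config:location=//winbox/isos \
            device-config:type=cifs \
            device-config:username='DOMAIN\user' \
            device-config:cifspassword=secret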
