Search Results

Search found 77950 results on 3118 pages for 'large file upload'.


  • Prevent Network Printers Automatically being added to 'Devices and Printers' in Windows 7

    - by Ben
    I think my question is a duplicate of this one, but the original question was never properly answered (the steps described are for Windows XP). I am aware of the option to "Turn off Network Discovery" (under Control Panel > All Control Panel Items > Network and Sharing Center > Advanced sharing settings); I set this option (for both Home/Work and Private) but it doesn't seem to stop the printers getting added, and has the side effect of preventing me from browsing the list of machines on the network (which I need). I've tried the Windows XP registry option - but it doesn't seem to make any difference: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced\NoNetCrawling I find it annoying having the printer list cluttered with printers from all over the office which I am never going to use (especially since lots of them no longer physically exist, users just haven't deleted them from their machines). This must be a real problem for people in massive offices with large numbers of printers - but I can't seem to find a lot of people complaining about it - which makes me think I'm missing something obvious. I don't really want to hack the firewall or turn off sharing completely, I still want to select and use network printers and file shares. Any ideas?

    Read the article

  • How to add URLs to wiki (MediaWiki) powered documentation?

    - by Ian Boyd
    We have an internal company wiki. The wiki engine being used is MediaWiki, the wiki engine that runs Wikipedia. Some of it contains IT stuff. One of the things I want to have is hyperlinks to the various virtual machines. An example of a command, as it needs to run, is: vmrc://solo.avatopia.com:5901/Windows 2000 Server My first thought was to convert the URL into a link: [vmrc://solo.avatopia.com:5901/Windows 2000 Server] But the content renders literally as above: with the square brackets and all. Testing with other URL protocols: [http://solo.avatopia.com] [ftp://solo.avatopia.com] [ldap://solo.avatopia.com] [vmrc://solo.avatopia.com] Only the first two work and are converted to hyperlinks. The other two remain as literal text. How can I add URLs to MediaWiki-powered documentation? Original Question: We have an internal company wiki. The wiki engine being used is MediaWiki, the wiki engine that runs Wikipedia. Some of it contains IT stuff. One of the things I want to have is hyperlinks to the various virtual machines. An example of a command, as it needs to run, is: \\solo\VMRC Client\vmrc.exe solo.avatopia.com:5901/Windows 2000 Server If launching from a command prompt, you have to quote the spaces: C:\>"\\solo\VMRC Client\vmrc.exe" solo.avatopia.com:5901/"Windows 2000 Server" My first thought in converting the above for use on our wiki site is to simply HTML-ify it: file://\\solo\VMRC Client\vmrc.exe solo.avatopia.com:5901/&quot;Windows 2000 Server&quot; but MediaWiki only converts file://\solo\VMRC to a hyperlink; the remainder is text. I've tried other random things, including enclosing the URL in square brackets. What is the correct answer? I don't want to just randomly stumble on some format that happens to work today and breaks in the future.
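
    A hedged sketch of the usual configuration fix (an assumption, not something stated in the question): MediaWiki only turns bracketed URLs into links for schemes listed in $wgUrlProtocols, so the vmrc: scheme would need to be whitelisted in LocalSettings.php. The wiki's install path below is hypothetical.

        # Append the vmrc:// scheme to MediaWiki's external-link whitelist
        # (back up LocalSettings.php first; the path is an assumption).
        echo '$wgUrlProtocols[] = "vmrc://";' | sudo tee -a /var/www/wiki/LocalSettings.php
        # Spaces still terminate an external link, so encode them in the page text:
        #   [vmrc://solo.avatopia.com:5901/Windows%202000%20Server Windows 2000 Server]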

    Read the article

  • Search for files after a relative date using Windows search

    - by Zoredache
    I am looking for a way to save a search that includes a relative date. Specifically I am looking for a way to save a search that matches files with a modification date of 7 days ago. I have read the Windows Search Advanced Query Syntax document and I am not seeing a way to say 7 days ago. The numbers and ranges section does mention that relative dates are possible. The problem is that the relative dates described there do not fit the criteria I need. The lastweek value almost looks like what I want, except that if I run a query like after:lastweek on a Monday it will only show my files that have been modified since Sunday at 12:00. The lastweek/lastmonth values seem to be relative to the start of the week/month, which is not what I need. Multi-word relative dates: week, next month, last week, past month, or coming year. The values can also be entered contracted, as follows: thisweek, nextmonth, lastweek, pastmonth, comingyear. One nice thing about saved searches is that they are stored as an XML document and the file format is documented. I am not seeing how to form a correct value for a datetime. If I was able to understand this format, I suspect I could use a text editor and create a saved search that does what I want. Fragment from the examples: <conditions> <condition type="leafCondition" valuetype="System.StructuredQueryType.DateTime" property="System.DateModified" operator="imp" value="R00UUUUUUUUZZXD-30NU" propertyType="wstr" /> </conditions> To summarize, I am looking for an answer to one or both of these questions: How do I make a query for '7 days ago' using the standard syntax? How is the DateTime stored in a saved search?

    Read the article

  • IIS High use & Server Performance issues

    - by HaydnWVN
    Have an SBS2011 running Exchange, a database app and a few other things serving 5 users (3 low use, 1 high). The server was never specced for the database app so it isn't as powerful as I'd like... Only 12GB RAM. We have increasingly found performance problems with this server; last week it was so bad I couldn't even connect remotely. To free up some available RAM I have (over the past month or so): Restricted the Exchange Message Store to 1GB with (so far) no ill effects. Restricted SQL Databases (including SBSMonitoring and Sharepoint/##SSEE (which isn't used)). Now I am finding that IIS worker threads are using up the available memory and I have (so far) been unable to track down much useful information about restricting them. This server is not 'serving' anything web-based apart from OWA, which I am finding people using because Outlook is so slow (again related to the server's performance). I am aware that Exchange on SBS2011 is designed to use up available resources (and concede when other applications request them). But it is not doing so (or anywhere near fast enough) for our needs. Opening the database application (using Postgres) takes 5+ minutes from client machines and regularly times out or crashes due to this. After a reboot (before SQL/Exchange/IIS databases are very large/totally cached) we get the performance we need and expect. Previously a reboot once a month was enough... Then once a week... Now they have taken to rebooting it almost daily!
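
    On the specific point of restricting the IIS workers, one hedged pointer (the pool name and limit below are placeholders, not taken from the question): IIS 7.5 application pools can be told to recycle once their private memory passes a threshold, which caps how much a worker process holds on to between requests.

        %windir%\system32\inetsrv\appcmd.exe list apppools
        %windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /recycling.periodicRestart.privateMemory:524288
        rem Value is in KB (512 MB here); apply it to whichever pool OWA/SharePoint actually uses.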

    Read the article

  • How small (spec wise) can a virtual machine be and still boot up and run some sort of OS?

    - by IllvilJa
    One of the advantages with virtual machines is that you can be very flexible with their sizes. If the host system permits it, you can have a very large virtual machine with a lot of virtual RAM and disk. Also, you can decide to go the other way around, to give the virtual machine a very modest amount of RAM and disk and then choose and configure the OS appropriately. The question is, how small a virtual machine have people managed to set up (and get to both boot up and run)? Virtual machines doing something useful are preferable, even if I know "useful" in this context is awfully subjective, but laboratory cases with a configuration stripped beyond common sense could be interesting as well, just to see what people manage to boot and run. Quite an open-ended question and quite academic, but think of it: an extremely small VM (which still does something useful) takes very little memory and disk and can be quite quickly saved to and restored from disk. If it's also gentle on CPU resources, one might consider having a huge number of such VMs up and running on a host. (Imagine a VM running just an old Commodore 64 or Commodore Amiga in it. Ok, way wrong CPU architecture for modern virtualization software running on an x86-based PC, but still an interesting thought. You could have quite a few such small VMs running on a modern PC.)

    Read the article

  • mpirun -np N, what if N is larger than my core number?

    - by Daniel
    Say I have a 4-core workstation. What would Linux (Ubuntu) do if I execute mpirun -np 9 XXX? Q1. Will all 9 run together immediately, or will they run 4 at a time? Q2. I suppose that using 9 is not good because of the remainder of 1; will it confuse the computer? (I don't know if it gets confused at all, or if the "head" of the computer decides which core among the 4 cores will be used, or if one is picked randomly.) Who decides which core to call? Q3. If my CPU is not bad, my RAM is okay and large enough, and my case is not very big, is it a good idea, in order to fully use my CPU and RAM, to run mpirun -np 8 XXX, or even mpirun -np 12 XXX? Q4. Who decides all of this efficiency optimization: Ubuntu, Linux, the motherboard, or the CPU? Your enlightenment would be really appreciated.
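
    Not an authoritative answer, just a quick way to poke at this on the workstation itself (Open MPI is assumed here; other MPI implementations behave differently):

        # How many cores the Linux scheduler sees:
        nproc
        # Launch one rank per core (usually the sensible default for a CPU-bound job):
        mpirun -np $(nproc) ./XXX
        # Newer Open MPI refuses to start more ranks than slots unless explicitly allowed:
        mpirun --oversubscribe -np 9 ./XXX
        # With 9 ranks on 4 cores, all 9 start at once and the kernel scheduler
        # time-slices them; MPI itself does not queue them 4 at a time.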

    Read the article

  • Can't log in after restoring from Time Machine

    - by Jay Conrod
    My friend uses a Macbook Pro with Snow Leopard 10.6.2. She uses both FileVault and Time Machine to preserve her data. Recently, she suffered a hard disk failure. After restoring from Time Machine using the Snow Leopard install disk, she gets the following error when logging in: You are unable to log in to the FileVault user account at this time. Logging into the account failed because an error occurred. When examining the file system through Terminal, I noticed her home directory is not present: there is no /Users/username directory, or the FileVault .sparsebundle file that's supposed to be there. When using Time Machine.app on /Users, it appears as if her home directory was never there. Additionally, I did a search on the backup disk with the following command: sudo find /Volumes/backup -name '*.sparsebundle' No results. She told me that after working with some large data files, Time Machine would come on, and it would sound like it was transferring a lot of data to the hard disk. Time Machine must have been doing something, right? How can we recover her files? Are they still there?

    Read the article

  • How to deduplicate 40TB of data?

    - by Michael Stauffer
    I've inherited a research cluster with ~40TB of data across three filesystems. The data stretches back almost 15 years, and there are most likely a good amount of duplicates as researchers copy each other's data for different reasons and then just hang on to the copies. I know about de-duping tools like fdupes and rmlint. I'm trying to find one that will work on such a large dataset. I don't care if it takes weeks (or maybe even months) to crawl all the data - I'll probably throttle it anyway to go easy on the filesystems. But I need to find a tool that's either somehow super efficient with RAM, or can store all the intermediary data it needs in files rather than RAM. I'm assuming that my RAM (64GB) will be exhausted if I crawl through all this data as one set. I'm experimenting with fdupes now on a 900GB tree. It's 25% of the way through and RAM usage has been slowly creeping up the whole time, now it's at 700MB. Or, is there a way to direct a process to use disk-mapped RAM so there's much more available and it doesn't use system RAM? I'm running CentOS 6.
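
    A minimal sketch of a low-RAM pass, not a recommendation for any particular tool: keep all intermediate state in flat files on disk and only checksum files whose sizes collide. Assumes GNU find/awk/uniq and filenames without embedded newlines; /data is a placeholder for the real mount points.

        find /data -type f -printf '%s\t%p\n' | sort -n > /tmp/all-files.tsv
        # Keep only entries whose size occurs more than once:
        awk -F'\t' 'NR==FNR {c[$1]++; next} c[$1] > 1' /tmp/all-files.tsv /tmp/all-files.tsv > /tmp/size-dups.tsv
        # Checksum just those candidates, then group identical checksums:
        cut -f2 /tmp/size-dups.tsv | xargs -d '\n' md5sum > /tmp/hashes.txt
        sort /tmp/hashes.txt | uniq -w32 --all-repeated=separate > /tmp/duplicate-groups.txt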

    Read the article

  • Nginx's speed, and how to replicate it [migrated]

    - by Mediocre Gopher
    I'm interested in this more from an academic standpoint than a practical one; I don't plan on creating a production webserver to compete with nginx. What I'm wondering is how exactly nginx is so fast. The top google response for this is this thread, but it merely links to a cryptic slideshow and a general covering of different io strategies. All other results seem to simply describe how fast nginx is, rather than the reason. I tried building a simple erlang server to try to compete with nginx, but to no avail; nginx won out. All my server does is spawn a new process for each request, use that process to read the file to a socket, then close the file and kill the thread. It's not complicated, but given erlang's lightweight processes and underlying aio structure I thought it would compete, but nginx still wins out by a consistent 300 ms average under a heavy stress test. What is nginx doing that my simple server isn't? My first thought would be keeping files in main memory instead of tossing them between requests, but the filesystem cache does this already so I didn't think it would make that great of a difference. Am I wrong? Or is there something else that I'm missing?

    Read the article

  • kickstart ks.cfg: Where should 'url' point?

    - by Stefan Lasiewski
    I have a kickstart file (ks.cfg) on a floppy (old style). I am trying to install CentOS 5.4. The top of my ks.cfg says this: install # Install from local cdrom or over the network. #cdrom url --url http://kickstart.example.org/pub/centos/5.4/ On the Apache server side, this command is failing with these 404s: kickstart.example.org 192.168.16.180 - - [01/Jun/2010:17:24:30 -0700] "GET /pub/centos/5.4///disc1/.discinfo HTTP/1.1" 404 314 "-" "urlgrabber/3.1.0" kickstart.example.org 192.168.16.180 - - [01/Jun/2010:17:24:43 -0700] "GET /pub/centos/5.4/repodata/repomd.xml HTTP/1.1" 404 316 "-" "urlgrabber/3.1.0 yum/3.2.22" It seems that the value of my url doesn't match the directory structure on the server. I swear this worked a few months ago. Someone else maintains the Yum repository, and they say nothing has changed. What should the value of url be? Should this only include the OS (/pub/centos/5.4/), or should it include the architecture (/pub/centos/5.4/os/x86_64)? I see that Kickstart is trying to grab a file called 'repomd.xml', but why is it looking in '/pub/centos/5.4/repodata/repomd.xml', when these files actually exist at '/pub/centos/5.4/os/x86_64/repodata/repomd.xml' and other locations at '/pub/centos/5.4/*/$ARCH/repodata/repomd.xml'? I don't see this documented or explained well in the RedHat 5 Installation Guide.
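
    A hedged check rather than an answer: the installer expects .discinfo, images/ and repodata/ directly under whatever URL it is given, which would point at the arch-specific tree here (the paths below are inferred from the 404s above, not confirmed).

        # Both of these should return 200 if the arch tree is the right target:
        curl -I http://kickstart.example.org/pub/centos/5.4/os/x86_64/.discinfo
        curl -I http://kickstart.example.org/pub/centos/5.4/os/x86_64/repodata/repomd.xml
        # If they do, the corresponding ks.cfg line would be:
        #   url --url http://kickstart.example.org/pub/centos/5.4/os/x86_64/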

    Read the article

  • VirtualBox sound problem under Ubuntu

    - by VoY
    As I recently upgraded to karmic I started to see the following stuff in the logs when I run VirtualBox (the same message is logged over and over, many times per second): Oct 30 18:14:34 apocalypse pulseaudio[2813]: alsa-source.c: Resume failed, couldn't restore original fragment settings. (Old: 65536/65536, New 1073676288/65532) After a while the logs grow to large sizes and fill up all of my /var partition. In VirtualBox there is an option to choose between pulseaudio and alsa for sound, but it seems to have no effect. I am using virtualbox-3.0 packages, not the ose version. My system is up to date.

    Read the article

  • Is it possible to re-cab an Administrative Install Point?

    - by Nathaniel Bannister
    We have Acrobat 8 Pro at work, and our media was painfully out of date. Rather than install all of the machines at 8.0.0 and then do the 6 or 7 consecutive reboots Adobe expects you to be ok with, I decided I'd integrate the .msp files into the installer. After reading up on it, I figured out the exact patch order that Adobe required, extracted my cd to an Administrative install point, and ran the patches against it: msiexec /a AcroPro.msi /p AcrobatUpd810_efgj_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log" msiexec /a AcroPro.msi /p AcrobatUpd811_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log" msiexec /a AcroPro.msi /p AcrobatUpd812_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log" msiexec /a AcroPro.msi /p AcrobatUpd813_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log" msiexec /a AcroPro.msi /p AcrobatUpd816_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log" msiexec /a AcroPro.msi /p AcrobatUpd817_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log" msiexec /a AcroPro.msi /p AcrobatUpd820_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log" msiexec /a AcroPro.msi /p AcrobatUpd822_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log" msiexec /a AcroPro.msi /p AcrobatUpd823_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log" msiexec /a AcroPro.msi /p AcrobatUpd825_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log" msiexec /a AcroPro.msi /p AcrobatUpd826_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log" Now I have an AIP that is fully patched to 8.2.6 (tested working prior to attempting to cab it), but it is absolutely huge (1.2 GB). What I would like to do is take the folders within the AIP and put them back into a cab file for the sake of convenience in transferring the files around. I tried the command: cscript "C:\Program Files\Microsoft SDKs\Windows\v7.0\Samples\sysmgmt\msi\scripts\WiMakCab.vbs" AcroPro.msi Data1 /L /C /S Per the guide I was using, this did produce the cab file I wanted; however, the resulting MSI fails to install with error 2602. It's been a while since I've done something like this, and it's probably a glaring oversight on my part, but any insight would be much appreciated.

    Read the article

  • Files built with a makefile are disappearing (including the binary)

    - by Reid
    I am building a program on a TS-7800 (SBC), and when I run make (shown below), it appears to go through all of the steps normally, but in the end I do not get a binary file. Why is this, and how can I get my file? makefile CC= /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc # compiler options #CFLAGS= -O2 CFLAGS= -mcpu=arm9 #CFLAGS= -pg -Wall # linker LN= $(CC) # linker options LNFLAGS= #LNFLAGS= -pg # extra libraries used in linking (use -l command) LDLIBS= -lpthread # source files SOURCES= HMITelem.c Cpacket.c GPS.c ADC.c Wireless.c Receivers.c CSVReader.c RPM.c RS485.c # include files INCLUDES= Cpacket.h HMITelem.h CSVReader.h RS485.h # object files OBJECTS= HMITelem.o Cpacket.o GPS.o ADC.o Wireless.o Receivers.o CSVReader.o RPM.o RS485.o HMITelem: $(OBJECTS) $(LN) $(LNFLAGS) -o $@ $(OBJECTS) $(LDLIBS) .c.o: $*.c $(CC) $(CFLAGS) -c $*.c RUN : ./HMITelem #clean: # rm -f *.o # rm -f *~ Output root@ts7800:ReidTest# make /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c HMITelem.c /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c Cpacket.c /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c GPS.c /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c ADC.c /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c Wireless.c /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c Receivers.c /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c CSVReader.c /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c RPM.c /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c RS485.c /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -o HMITelem HMITelem.o Cpacket.o GPS.o ADC.o Wireless.o Receivers.o CSVReader.o RPM.o RS485.o -lpthread Thank you.
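
    A couple of hedged sanity checks right after make finishes (nothing authoritative, just narrowing down whether the link step produced anything in this directory and whether the result is usable on the board):

        ls -l HMITelem     # did the linker actually leave a file here?
        file HMITelem      # should report an ARM executable, not x86
        make -n            # dry run: shows whether make now considers HMITelem up to date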

    Read the article

  • Trying to build OpenSimulator.. nant fails

    - by Gary
    Output of nant is: Buildfile: file:///root/opensim-0.6.8-release/OpenSim.build Target framework: Mono 2.0 Profile Target(s) specified: build [echo] Using 'mono-2.0' Framework init: Debug: [echo] Platform unix build: [nant] /root/opensim-0.6.8-release/OpenSim/Framework/Servers/HttpServer/OpenSim.Framework.Servers.HttpServer.dll.build build Buildfile: file:///root/opensim-0.6.8-release/OpenSim/Framework/Servers/HttpServer/OpenSim.Framework.Servers.HttpServer.dll.build Target framework: Mono 2.0 Profile Target(s) specified: build build: [echo] Build Directory is /root/opensim-0.6.8-release/OpenSim/Framework/Servers/HttpServer/bin/Debug [csc] Compiling 29 files to '/root/opensim-0.6.8-release/OpenSim/Framework/Servers/HttpServer/bin/Debug/OpenSim.Framework.Servers.HttpServer.dll'. [csc] /root/opensim-0.6.8-release/OpenSim/Framework/Servers/HttpServer/AsynchronousRestObjectRequester.cs(103,41): error CS0246: The type or namespace name `TResponse' could not be found. Are you missing a using directive or an assembly reference? [csc] Compilation failed: 1 error(s), 0 warnings BUILD FAILED - 0 non-fatal error(s), 1 warning(s) /root/opensim-0.6.8-release/OpenSim/Framework/Servers/HttpServer/OpenSim.Framework.Servers.HttpServer.dll.build(14,6): External Program Failed: /usr/lib/pkgconfig/../../lib/mono/2.0/gmcs.exe (return code was 1) Total time: 1.2 seconds. BUILD FAILED Nested build failed. Refer to build log for exact reason. Total time: 1.3 seconds. OS is Fedora 7. Any ideas appreciated. :)

    Read the article

  • Can't login via ssh after upgrading to Ubuntu 12.10

    - by user42899
    I have an Ubuntu 12.04LTS instance on AWS EC2 and I upgraded it to 12.10 following the instructions at https://help.ubuntu.com/community/QuantalUpgrades. After upgrading I can no longer ssh into my VM. It isn't accepting my ssh key and my password is also rejected. The VM is running, reachable, and SSH is started. The problem seems to be about the authentication part. SSH has been the only way for me to access that VM. What are my options? ubuntu@alice:~$ ssh -v -i .ssh/sos.pem [email protected] OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012 debug1: Reading configuration data /home/ubuntu/.ssh/config debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug1: Connecting to www.hostname.com [37.37.37.37] port 22. debug1: Connection established. debug1: identity file .ssh/sos.pem type -1 debug1: identity file .ssh/sos.pem-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1 debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: sending SSH2_MSG_KEX_ECDH_INIT debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: Server host key: RSA 33:33:33:33:33:33:33:33:33:33:33:33:33:33 debug1: Host '[www.hostname.com]:22' is known and matches the RSA host key. debug1: Found key in /home/ubuntu/.ssh/known_hosts:12 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password debug1: Next authentication method: publickey debug1: Trying private key: .ssh/sos.pem debug1: read PEM private key done: type RSA debug1: Authentications that can continue: publickey,password debug1: Next authentication method: password [email protected]'s password: debug1: Authentications that can continue: publickey,password Permission denied, please try again.

    Read the article

  • Why does "commit" appear in the mysql slow query log?

    - by Tom
    In our MySQL slow query logs I often see lines that just say "COMMIT". What causes a commit to take time? Another way to ask this question is: "How can I reproduce getting a slow COMMIT statement with some test queries?" From my investigation so far I have found that if there is a slow query within a transaction, then it is the slow query that gets output into the slow log, not the commit itself. Testing In the mysql command line client: mysql> begin; Query OK, 0 rows affected (0.00 sec) mysql> UPDATE members SET myfield=benchmark(9999999, md5('This is to slow down the update')) WHERE id = 21560; Query OK, 0 rows affected (2.32 sec) Rows matched: 1 Changed: 0 Warnings: 0 At this point (before the commit) the UPDATE is already in the slow log. mysql> commit; Query OK, 0 rows affected (0.01 sec) The commit happens fast; it never appeared in the slow log. I also tried an UPDATE which changes a large amount of data but again it was the UPDATE that was slow, not the COMMIT. However, I can reproduce a slow ROLLBACK that takes 46s and gets output to the slow log: mysql> begin; Query OK, 0 rows affected (0.00 sec) mysql> UPDATE members SET myfield=CONCAT(myfield,'TEST'); Query OK, 481446 rows affected (53.31 sec) Rows matched: 481446 Changed: 481446 Warnings: 0 mysql> rollback; Query OK, 0 rows affected (46.09 sec) I understand why rollback has a lot of work to do and therefore takes some time. But I'm still struggling to understand the COMMIT situation - i.e. why it might take a while.
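
    One hedged avenue, not a definitive explanation: COMMIT itself does little query work; it mostly pays for flushing the transaction log (and binlog) to disk, so slow commits usually point at fsync contention on the I/O side rather than at the statements inside the transaction. Settings worth a look, plus one way to generate that contention:

        # How much disk syncing each COMMIT does (assumes InnoDB):
        mysql -e "SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';"
        mysql -e "SHOW VARIABLES LIKE 'sync_binlog';"
        # Many concurrent small transactions on slow storage tend to surface slow COMMITs:
        mysqlslap --concurrency=50 --iterations=10 --auto-generate-sql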

    Read the article

  • Apache won't start after attempting to install SSL

    - by yummm
    Below is what my VirtualHosts look like in httpd.conf <VirtualHost *:80> # Admin email, Server Name (domain name) and any aliases ServerAdmin [email protected] ServerName mydomain.com ServerAlias www.mydomain.com # Index file and Document Root (where the public files are located) DirectoryIndex index.php DocumentRoot /home/mydomain/public_html/mydomain.com/public # Custom log file locations LogLevel warn ErrorLog /home/mydomain/public_html/mydomain.com/log/error.log CustomLog /home/mydomain/public_html/mydomain.com/log/access.log combined </VirtualHost> <VirtualHost *:443> SSLEngine on SSLCertificateFile /etc/httpd/conf/ssl.crt/mydomain.com.crt SSLCertificateKeyFile /etc/httpd/conf/ssl.key/mydomain.com.key ServerName mydomain.com DirectoryIndex index.php DocumentRoot /home/mydomain/public_html/mydomain.com/public </VirtualHost> I'm using the latest version of Apache on CentOS and there isn't any error being generated. Apache just will not start. Any ideas what I'm doing wrong? UPDATE - Found these messages in the error log: [Tue Mar 16 02:07:57 2010] [error] Init: Private key not found [Tue Mar 16 02:07:57 2010] [error] SSL Library Error: 218710120 error:0D094068:asn1 encoding routines:d2i_ASN1_SET:bad tag [Tue Mar 16 02:07:57 2010] [error] SSL Library Error: 218529960 error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag [Tue Mar 16 02:07:57 2010] [error] SSL Library Error: 218595386 error:0D07803A:asn1 encoding routines:ASN1_ITEM_EX_D2I:nested asn1 error [Tue Mar 16 02:07:57 2010] [error] SSL Library Error: 218734605 error:0D09A00D:asn1 encoding routines:d2i_PrivateKey:ASN1 lib
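
    The messages in the update point at the key file rather than at the vhost config; two standard openssl checks (using the paths from the vhost above) would confirm whether the key is really a PEM RSA key and whether it matches the certificate. This is a diagnostic sketch, not a confirmed cause.

        # Is the key file a parseable PEM RSA private key?
        openssl rsa -in /etc/httpd/conf/ssl.key/mydomain.com.key -check -noout
        # Do certificate and key belong together? The two digests must be identical.
        openssl x509 -noout -modulus -in /etc/httpd/conf/ssl.crt/mydomain.com.crt | openssl md5
        openssl rsa  -noout -modulus -in /etc/httpd/conf/ssl.key/mydomain.com.key | openssl md5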

    Read the article

  • Linking to network shares from Sharepoint pages

    - by Russell C
    So the place I work decided to set up a Microsoft Sharepoint 2010 server for task management and I (as the lowly entry-level intern) have been tasked with "figuring it out." One thing that the end users really, really, really want is the ability to link to network shares (that are readable by anyone who will be using sharepoint) from a Sharepoint web page. In order to do this, I have edited the HTML manually with several lines that look like the following: <a href="file://server/share">Server Share</a> This works (sometimes) but the link reported by Sharepoint is often wrong, and editing pages that contain these links will mangle the code such that when I open it, the code no longer looks like what it did when I last hit save (breaking all those links). Obviously this is not sustainable. I've been told by coworkers that "It worked that way at the last place I worked" but I haven't found out how yet. Any ideas on how this would work or am I barking up the wrong tree? None of the knowledge searches I've done shed any light on the situation. Thanks for any help! -Russell P.S. It should be noted that the file option in an href tag ONLY works in IE (which is a real bummer since we mostly use Firefox).

    Read the article

  • JavaScript is not pointing correctly on IIS7 running behind Apache mod_proxy

    - by sohum
    So here's my setup. I've got a DynDNS account since I have a dynamic IP. I have Apache listening on port 80 and IIS7 on port 8080. I don't want users to have to enter in mydyndns.dyndns.com:8080 to get to IIS7, so I've added the following code to my Apache httpd.conf file to enable a proxy/reverse proxy: <VirtualHost *:80> ProxyPass / http://localhost:8080/myASPSite/ ProxyPassReverse / http://localhost:8080/myASPSite/ ServerName myaspsite.mydomain.com </VirtualHost> I've got a CNAME record set up on my DNS so that myaspsite.mydomain.com redirects to mydyndns.dyndns.com. When I type in myaspsite.mydomain.com into my browser, everything works beautifully... mostly. IIS7 serves up the ASPX pages and visitors to the site don't know any better. A problem arises, however, when I add Ajax Control Toolkit controls into my ASPX website, because these generate JavaScript and apparently mod_proxy_html isn't geared to handle the JS URIs properly. Sure enough, when I open up the source of my ASPX page, it has script elements as follows: <script src="/myASPSite/WebResource.axd?xyz" type="text/javascript"></script> <script src="/myASPSite/ScriptResource.axd?xyz" type="text/javascript"></script> Sure enough, these scripts are attempting to be resolved at http://myaspsite.mydomain.com/myASPSite/WebResource..., which through the proxy translates to localhost:8080/myASPSite/myASPSite/.... How can I solve this problem? The couple of websites I found suggested turning on ProxyHTMLExtended but when I tried doing that, the server did not start. I'm guessing I didn't know how to do it properly. Does anyone have a handy couple of config lines that I can add to my Apache conf file to get this working as I need? I'm using Apache 2.2.11. Thanks!

    Read the article

  • amavisd + postfix + dovecot blocks gif images

    - by David W
    I occasionally have a client who tries to email me and says his email gets blocked by my server. When I check the logs, I see this: Sep 6 18:12:52 myers amavis[15197]: (15197-08) p.path BANNED:1 [email protected]: "P=p003,L=1,M=multipart/mixed | P=p002,L=1/2,M=application/ms-tnef,T=tnef,N=winmail.dat | P=p004,L=1/2/1,T=image,T=gif,N=image001.gif,N=image001.gif", matching_key="(?-xism:^\\.(exe|lha|tnef|cab|dll)$)" And then a little later... Sep 6 18:12:58 myers amavis[15197]: (15197-08) Blocked BANNED (.image,.gif,image001.gif,image001.gif), [213.199.154.205] [157.56.236.229] <[email protected]> - > <[email protected]>, quarantine: banned-g4QhZGvwJvDF, Message-ID <6A9596BE385EC1499F83E464FA9ECCA20C668320@BY2PRD0611MB417.namprd06.prod.outlook.com>, mail_id: g4QhZGvwJvDF, Hits: -, size: 20916, 8439 ms From this and the bounce that he forwards me (to a different address I give him), I determine that it's bouncing because of the file in his signature (image001.gif). However, that does NOT match the "key" in this part of the log: matching_key="(?-xism:^\\.(exe|lha|tnef|cab|dll)$)" Furthermore, the .gif extension is nowhere to be found in the /etc/amavisd.conf file (i.e. I'm not blocking emails because they contain .gif images). Am I missing something here? This is strange... and annoying.

    Read the article

  • Permissions on mac for itunes library with multiple users - idea

    - by John
    I currently have a lot of music on an external drive and my iTunes set up from there. However, periodically, when the external drive isn't connected, iTunes will default back to the library location of my home directory user path. I don't want to mess with an external drive, as my mac HD is large enough to house the music collection. However, I have 4 family members - all with their own logins - using this same gob of music. I don't want 4 copies of the library, only one with all libraries referencing it. So, what I want to do is: 1 - move all music files to a shared directory at /Macintosh HD/users/music. I created this directory and adjusted permissions, so all four users can read and write to this directory. 2 - get all four accounts to reference this library instead of the external or local home locations. I am hoping I can just check the box to keep the library organized in my account, which is the admin account, and let iTunes move it all. Then delete the current libraries for each account and re-add from the new shared location. Will the iTunes organization process cause permissions issues, either by restricting access to the files to my account only, or by removing write permissions, or any other 'gotcha'? I am having a hard time coming up with a smooth solution that won't break everything and cause me to have mega duplicates or access issues. I would prefer not to do any xml library file editing if possible. Am I dreaming? Thanks for help.
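
    A rough sketch of the permissions side only (the group and exact path below are assumptions; test on a copy of the library first). The usual gotcha is that files copied in later by one account are not group-writable, so the permissions need to be re-applied or inherited:

        sudo chown -R :staff "/Users/music"
        sudo chmod -R g+rwX "/Users/music"
        # Files iTunes copies in later may arrive without group write; re-running the
        # two commands above (or adding an inheritable ACL on the folder) addresses that.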

    Read the article

  • OpenVPN server throws an "access denied" error

    - by HackToHell
    OpenVPN refuses to start up and exits with this error ever since I upgraded Ubuntu from 11.04 to 11.10 Dec 14 19:12:38 oogle ovpn-server[32150]: OpenVPN 2.2.0 i686-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [eurephia] [MH] [PF_INET6] [IPv6 payload 20110424-2 (2.2RC2)] built on Jul 4 2011 Dec 14 19:12:38 oogle ovpn-server[32150]: NOTE: the current --script-security setting may allow this configuration to call user-defined scripts Dec 14 19:12:38 oogle ovpn-server[32150]: Note: cannot open openvpn-status.log for WRITE Dec 14 19:12:38 oogle ovpn-server[32150]: Note: cannot open ipp.txt for READ/WRITE Dec 14 19:12:38 oogle ovpn-server[32150]: Diffie-Hellman initialized with 1024 bit key Dec 14 19:12:38 oogle ovpn-server[32150]: Cannot load private key file server.key: error:0200100D:system library:fopen:Permission denied: error:20074002:BIO routines:FILE_CTRL:system lib: error:140B0002:SSL routines:SSL_CTX_use_PrivateKey_file:system lib Dec 14 19:12:38 oogle ovpn-server[32150]: Error: private key password verification failed Dec 14 19:12:38 oogle ovpn-server[32150]: Exiting Dec 14 19:12:46 oogle ovpn-server[32201]: OpenVPN 2.2.0 i686-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [eurephia] [MH] [PF_INET6] [IPv6 payload 20110424-2 (2.2RC2)] built on Jul 4 2011 Dec 14 19:12:46 oogle ovpn-server[32201]: NOTE: the current --script-security setting may allow this configuration to call user-defined scripts Dec 14 19:12:46 oogle ovpn-server[32201]: Note: cannot open openvpn-status.log for WRITE Dec 14 19:12:46 oogle ovpn-server[32201]: Note: cannot open ipp.txt for READ/WRITE Dec 14 19:12:46 oogle ovpn-server[32201]: Diffie-Hellman initialized with 1024 bit key Dec 14 19:12:46 oogle ovpn-server[32201]: Cannot load private key file server.key: error:0200100D:system library:fopen:Permission denied: error:20074002:BIO routines:FILE_CTRL:system lib: error:140B0002:SSL routines:SSL_CTX_use_PrivateKey_file:system lib Dec 14 19:12:46 oogle ovpn-server[32201]: Error: private key password verification failed Dec 14 19:12:46 oogle ovpn-server[32201]: Exiting
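
    The log complains about reading server.key rather than about the VPN config itself; two hedged checks (stock /etc/openvpn paths assumed, adjust to where the key actually lives): file ownership and permissions after the upgrade, and whether the key unexpectedly requires a passphrase.

        ls -l /etc/openvpn/server.key        # readable by the user OpenVPN starts as?
        sudo chown root:root /etc/openvpn/server.key
        sudo chmod 600 /etc/openvpn/server.key
        # "private key password verification failed" can also mean the key is encrypted:
        sudo openssl rsa -in /etc/openvpn/server.key -check -noout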

    Read the article

  • Can't get .htaccess to work

    - by orokusaki
    I'm using Apache2 on Ubuntu Lucid Lynx. I have config set to use .htaccess like normal. This is my default site: <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride All </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride All Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> I've tried lower case "all" (AllowOverride all) as well. My .htaccess file looks like this: //Rewrite all requests to www Options +FollowSymLinks RewriteEngine on RewriteCond %{HTTP_HOST} ^mydomain.com [nc] RewriteRule ^(.*)$ http://www.mydomain.com/$1 [r=301,nc] //301 Redirect "old_junk.html" File to "new_junk.html" Redirect 301 /old_junk.html /new_junk.html //301 Redirect Entire Directory "old_junk/" to "new_junk/" RedirectMatch 301 /old_junk/(.*) /new_junk//$1 // Copy and paste redirect examples from above: (with mydomain replaced with my actual domain... and my computer is plugged in)
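
    Two quick checks worth running before digging deeper (Debian/Ubuntu commands assumed). One observation, offered tentatively as a likely cause: Apache config files only support # comments, so the //... lines in the .htaccess above are parsed as directives and would themselves trigger an "Invalid command" error.

        sudo apache2ctl -t                                     # overall config syntax
        sudo a2enmod rewrite && sudo service apache2 reload    # RewriteRule needs mod_rewrite enabled
        tail -n 20 /var/log/apache2/error.log                  # an "Invalid command '//Rewrite...'" line would confirm the comment problem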

    Read the article

  • rsync over ssh backup failing after relocation of server

    - by OlduvaiHand
    I've got two FreeBSD machines set up; one serves video data and the other is the backup for the first. At this point I've got around 4TB of data. I add files to the video server a few at a time, and was planning to use rsync over ssh to keep the backup machine up to date. I did the initial, large backup with both machines hooked up to the same subnet at the lab with no problems using rsync. Then, when I moved the backup machine off-site (but still on the university network), I attempted a sync without changing anything other than the IP (as the machine is now on a different subnet) and got the following error: 2010/03/22 15:55:21 [1260] rsync: connection unexpectedly closed (6340840244 bytes received so far) [receiver] 2010/03/22 15:55:21 [1260] rsync error: error in rsync protocol data stream (code 12) at io.c(601) [receiver=3.0.7] 2010/03/22 15:55:21 [1258] rsync: connection unexpectedly closed (60 bytes received so far) [generator] 2010/03/22 15:55:21 [1258] rsync error: unexplained error (code 255) at io.c(601) [generator=3.0.7] The script that handles the backup hasn't been changed, nor has the crontab that invokes it. Does anyone have any ideas about what might be causing the hiccup? I was under the impression that it might have something to do with the ssh connection timing out or something along those lines, but am not entirely clear on how to diagnose the cause of the problem.
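
    A hedged workaround sketch rather than a diagnosis: if an idle connection is being dropped somewhere between the two subnets, ssh keepalives plus an explicit rsync I/O timeout usually either work around it or at least make the failure mode clearer (host and paths below are placeholders):

        rsync -az --timeout=600 \
            -e "ssh -o ServerAliveInterval=60 -o ServerAliveCountMax=5" \
            /data/video/ backup@offsite-host:/data/video/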

    Read the article

  • netsnmp - how to register string?

    - by user1495181
    I use net-snmp. I am trying to add my own MIBs (no need for a handler, just a MIB that I can get and set by an snmp call), so I followed the scalar example. In order to add my own MIBs I defined them in the MIB file and created an agent extension (see below). It works, so I now have an integer MIB. Now I want to add a string MIB, so I define the MIB, but I can't find a register API for strings like the one I have for the int - netsnmp_register_int_instance. I looked in the include files, but didn't find a matching one. agent: #include <net-snmp/net-snmp-config.h> #include <net-snmp/net-snmp-includes.h> #include <net-snmp/agent/net-snmp-agent-includes.h> #include "monitor.h" static int int_init = 0; /* default value */ void init_monitor(void) { oid open_connections_count_oid[] = { 1, 3, 6, 1, 4, 1, 8075, 1, 0 }; netsnmp_register_int_instance("open_connections_count", open_connections_count_oid, OID_LENGTH(open_connections_count_oid), &int_init, NULL); }
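
    Two tentative pointers, neither confirmed against this exact net-snmp version: for a read-write string registered from C, the watcher helper (netsnmp_create_watcher_info plus netsnmp_register_watched_instance, declared in agent/watcher.h) is the usual route; and if a read-only string is enough, snmpd's extend mechanism avoids C entirely. A sketch of the latter (snmpd.conf path and community string are assumptions):

        echo 'extend myString /bin/echo "hello from the agent"' | sudo tee -a /etc/snmp/snmpd.conf
        sudo /etc/init.d/snmpd restart
        snmpwalk -v2c -c public localhost NET-SNMP-EXTEND-MIB::nsExtendOutput1Line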

    Read the article
