Search Results

Search found 14738 results on 590 pages for 'outlook settings'.


  • Spaces and parentheses in the Windows PATH variable screw up batch files.

    - by NoName
    So, my path variable (System-Adv Settings-Env Vars-System-PATH) is set to: C:\Python26\Lib\site-packages\PyQt4\bin; %SystemRoot%\system32; %SystemRoot%; %SystemRoot%\System32\Wbem; %SYSTEMROOT%\System32\WindowsPowerShell\v1.0\; C:\Python26\; C:\Python26\Scripts\; C:\cygwin\bin; "C:\PathWithSpaces\What_is_this_bullshit"; "C:\PathWithSpaces 1.5\What_is_this_bullshit_1.5"; "C:\PathWithSpaces (2.0)\What_is_this_bullshit_2.0"; "C:\Program Files (x86)\IronPython 2.6"; "C:\Program Files (x86)\Subversion\bin"; "C:\Program Files (x86)\Git\cmd"; "C:\Program Files (x86)\PuTTY"; "C:\Program Files (x86)\Mercurial"; Z:\droid\android-sdk-windows\tools; Although, obviously, without the newlines. Notice the lines containing PathWithSpaces - the first has no spaces, the second has a space, and the third has a space followed by a parenthesis. Now, notice the output of this batch file: C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin\>vcvars32.bat C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin>"C:\Program Files (x86 )\Microsoft Visual Studio 9.0\Common7\Tools\vsvars32.bat" Setting environment for using Microsoft Visual Studio 2008 x86 tools. \What_is_this_bullshit_2.0";"C:\Program was unexpected at this time. C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin> set "PATH=C:\Pro gram Files\Microsoft SDKs\Windows\v6.0A\bin;C:\Python26\Lib\site-packages\PyQt4\ bin;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\ WindowsPowerShell\v1.0\;C:\Python26\;C:\Python26\Scripts\;C:\cygwin\bin;"C:\Path WithSpaces\What_is_this_bullshit";"C:\PathWithSpaces 1.5\What_is_this_bullshit_1 .5";"C:\PathWithSpaces (2.0)\What_is_this_bullshit_2.0";"C:\Program Files (x86)\ IronPython 2.6";"C:\Program Files (x86)\Subversion\bin";"C:\Program Files (x86)\ Git\cmd";"C:\Program Files (x86)\PuTTY";"C:\Program Files (x86)\Mercurial";Z:\dr oid\android-sdk-windows\tools;" or specifically the line: \What_is_this_bullshit_2.0";"C:\Program was unexpected at this time. So, what is this bullshit? Specifically: Directory in path that is properly escaped with quotes, but with no spaces = fine Directory in path that is properly escaped with quotes, and has spaces but no parenthesis = fine Directory in path that is properly escaped with quotes, and has spaces and has a parenthesis = ERROR Whats going on here? How can I fix this? I'll probably resort to a junction point to let my tools still work as workaround, but if you have any insight into this, please let me know :)

  • Setting font size of Closed Captions on iPhone using ffmpeg or mencoder

    - by forthrin
    Does anyone know how to either: Make ffmpeg set subtitle font size in the output video file Make mencoder produce an iPhone-compatible video file (with subtitles) I finally found out how to get Closed Captions video on iPhone, with mkv and srt files as source material. The secret was using the mov_text subtitle codec in ffmpeg (and turning on Closed Captions in the iPhone settings of course): ffmpeg -y -i in.mkv -i in.srt -map 0:0 -map 0:1 -map 1:0 -vcodec copy -acodec aac -ab 256k -scodec mov_text -strict -2 -metadata title="Title" -metadata:s:s:0 language=eng out.mp4 However, the font size appears very small on the iPhone, and I can't find out how to set it with ffmpeg (the iPhone has no option for this). I found out that mencoder has a -subfont-text-scale option, but I don't have a lot of experience with this program. The following, my best attempt so far, produces an output file which is not playable on the iPhone. sudo port install mplayer +mencoder_extras +osd mencoder in.mkv -sub in.srt -o out.mp4 -ovc copy -oac faac -faacopts br=256:mpeg=4:object=2 -channels 2 -srate 48000 -subfont-text-scale 10 -of lavf -lavfopts format=mp4 PS! As requested, here is the output from mencoder: 192 audio & 400 video codecs success: format: 0 data: 0x0 - 0xb64b9d2f libavformat version 54.6.101 (internal) libavformat file format detected. [matroska,webm @ 0x1015c9a50]Unknown entry 0x80 [lavf] stream 0: video (h264), -vid 0 [lavf] stream 1: audio (ac3), -aid 0, -alang eng VIDEO: [H264] 1280x544 0bpp 49.894 fps 0.0 kbps ( 0.0 kbyte/s) [V] filefmt:44 fourcc:0x34363248 size:1280x544 fps:49.894 ftime:=0.0200 ========================================================================== Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders libavcodec version 54.23.100 (internal) AUDIO: 48000 Hz, 2 ch, s16le, 448.0 kbit/29.17% (ratio: 56000->192000) Selected audio codec: [ffac3] afm: ffmpeg (FFmpeg AC-3) ========================================================================== ** MUXER_LAVF ***************************************************************** REMEMBER: MEncoder's libavformat muxing is presently broken and can generate INCORRECT files in the presence of B-frames. Moreover, due to bugs MPlayer will play these INCORRECT files as if nothing were wrong! ******************************************************************************* OK, exit. videocodec: framecopy (1280x544 0bpp fourcc=34363248) VIDEO CODEC ID: 28 AUDIO CODEC ID: 15002, TAG: 0 Writing header... [mp4 @ 0x1015c9a50]Codec for stream 0 does not use global headers but container format requires global headers [mp4 @ 0x1015c9a50]Codec for stream 1 does not use global headers but container format requires global headers Then the following repeats itself for every frame: Pos: 0.0s 1f ( 2%) 0.00fps Trem: 0min 0mb A-V:0.000 [0:0] [mp4 @ 0x1015c9a50]malformated aac bitstream, use -absf aac_adtstoasc Error while writing frame. I recognize -absf aac_adtstoasc as an ffmpeg option (does mencoder spawn ffmpeg?), but I don't know how to pass this option on (my hunch is this is not even the origin of the problem).
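
    If re-encoding the picture is acceptable, one alternative worth trying (not something the post itself tested) is to burn the SRT into the video with ffmpeg's subtitles filter, whose force_style option can override the font size. This assumes an ffmpeg build with libass, and it means -vcodec copy no longer applies - a sketch, with example bitrate and CRF values:

        # Sketch only: burns the subtitles into the picture at a chosen size.
        # Assumes ffmpeg was built with libass.
        ffmpeg -y -i in.mkv \
          -vf "subtitles=in.srt:force_style='FontSize=28'" \
          -vcodec libx264 -crf 20 \
          -acodec aac -ab 256k -strict -2 \
          out.mp4

    Because the text becomes part of the video, the iPhone's Closed Captions toggle no longer controls it.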

  • PPTP VPN Not Working - Peer failed CHAP authentication, PTY read or GRE write failed

    - by armani
    Brand-new install of CentOS 6.3. Followed this guide: http://www.members.optushome.com.au/~wskwok/poptop_ads_howto_1.htm And I got PPTPd running [v1.3.4]. I got the VPN to authenticate users against our Active Directory using winbind, smb, etc. All my tests to see if I'm still authenticated to the AD server pass ["kinit -V [email protected]", "smbclient", "wbinfo -t"]. VPN users were able to connect for like . . . an hour. I tried connecting from my Android phone using domain credentials and saw that I got an IP allocated for internal VPN users [which I've since changed the range, but even setting it back to the initial doesn't work]. Ever since then, no matter what settings I try, I pretty much consistently get this in my /var/log/messages [and the VPN client fails]: [root@vpn2 ~]# tail /var/log/messages Aug 31 15:57:22 vpn2 pppd[18386]: pppd 2.4.5 started by root, uid 0 Aug 31 15:57:22 vpn2 pppd[18386]: Using interface ppp0 Aug 31 15:57:22 vpn2 pppd[18386]: Connect: ppp0 <--> /dev/pts/1 Aug 31 15:57:22 vpn2 pptpd[18385]: GRE: Bad checksum from pppd. Aug 31 15:57:24 vpn2 pppd[18386]: Peer armaniadm failed CHAP authentication Aug 31 15:57:24 vpn2 pppd[18386]: Connection terminated. Aug 31 15:57:24 vpn2 pppd[18386]: Exit. Aug 31 15:57:24 vpn2 pptpd[18385]: GRE: read(fd=6,buffer=8059660,len=8196) from PTY failed: status = -1 error = Input/output error, usually caused by unexpected termination of pppd, check option syntax and pppd logs Aug 31 15:57:24 vpn2 pptpd[18385]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7) Aug 31 15:57:24 vpn2 pptpd[18385]: CTRL: Client 208.54.86.242 control connection finished Now before you go blaming the firewall [all other forum posts I find seem to go there], this VPN server is on our DMZ network. We're using a Juniper SSG-5 Gateway, and I've assigned a WAN IP to the VPN box itself, zoned into the DMZ zone. Then, I have full "Any IP / Any Protocol" open traffic rules between DMZ<--Untrust Zone, and DMZ<--Trust Zone. I'll limit this later to just the authenticating traffic it needs, but for now I think we can rule out the firewall blocking anything. Here's my /etc/pptpd.conf [omitting comments]: option /etc/ppp/options.pptpd logwtmp localip [EXTERNAL_IP_ADDRESS] remoteip [ANOTHER_EXTERNAL_IP_ADDRESS, AND HAVE TRIED AN ARBITRARY GROUP LIKE 5.5.0.0-100] Here's my /etc/ppp/options.pptpd.conf [omitting comments]: name pptpd refuse-pap refuse-chap refuse-mschap require-mschap-v2 require-mppe-128 ms-dns 192.168.200.42 # This is our internal domain controller ms-wins 192.168.200.42 proxyarp lock nobsdcomp novj novjccomp nologfd auth nodefaultroute plugin winbind.so ntlm_auth-helper "/usr/bin/ntlm_auth --helper-protocol=ntlm-server-1" Any help is GREATLY appreciated. I can give you any more info you need to know, and it's a new test server, so I can perform any tests/reboots required to get it up and going. Thanks a ton.
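
    A quick way to separate the winbind/AD side from pppd (the domain name, username and password below are placeholders) is to run the same helper that pppd is configured to use by hand:

        # Sketch: test domain authentication outside of pptpd/pppd.
        wbinfo -a 'MYDOMAIN\armaniadm%password'          # plain winbind check
        /usr/bin/ntlm_auth --request-nt-key \
            --domain=MYDOMAIN --username=armaniadm --password='password'

    If these succeed but CHAP still fails, the problem is more likely in the MS-CHAPv2/MPPE negotiation between pppd and the client than in the connection to Active Directory.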

  • Unencrypted Image of a TrueCrypt-Encrypted System Partition

    - by Dexter
    The general tenor around the internet seems to be that you can't create images of system partitions that have been encrypted (with truecrypt) other than with dd or similar sector-by-sector copy tools. These files however are very impractical given their size (and are obviously incompressible) which makes keeping multiple states/backups of your system partition rather expensive (..especially considering current hdd prices). The problem is that backup tools (like Acronis True Image, Clonezilla, etc.) won't give you the option to create an image of (mounted/opened) Truecrypt partitions, or that there is no recovery environment for restoring the backup, that would allow to run truecrypt before doing any actual restoring. After some trial and error however, I believe I have found a very simple way. Since Truecrypt (running in Linux) creates a virtual block device, that it uses for mounting the unencrypted partitions into the file system, partclone can be used for creating/restoring images. What I did: boot up a linux live disk mount/open the drive/device/partition in truecrypt unmount the filesystem mount point again, like so: umount /media/truecryptX ("X" being the partition number assigend by truecrypt) use partclone (this is what clonezilla would do too, except that clonezilla only offers you to back up real drive partitions, not virtual block devices): partclone.ntfs -c -s /dev/mapper/truecryptX -o nameOfBackupFile for restoring steps 1-3 remain the same, and step 4 is partclone.ntfs -r -s nameOfBackupFile -o /dev/mapper/truecryptX A backup and test-restore of the system (with this method) seems to have worked fine (and the changed settings were reverted to the backup-state). The backup file is ~40 GB (and compressible down to <8GB with 7zip/LZMA2 on the "fast" setting). I can't quite believe that I'm the only one that wants to create images of encrypted drives, but doesn't want to waste 100GB on the backup of one single system state. So my question now is, given how simple this was, and that no one seems to mention anywhere that this is possible - did I miss something? or did I do something wrong? Is there any situation that I didn't think of where this method will fail? Obviously, the backup file needs to be stored in some other encrypted place in order to still remain confidential, since it is unencrypted. Also, in order to do a full "bare metal" restore, one would have to actually first (re-)install Windows, encrypt it, and only then restore the backup file. The funny thing however is that you won't need to backup any partition tables, etc. since the reinstall will effectively take care of that. Is there anything else? This is imho still a lot better than having sector-by-sector images..
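
    For reference, the steps above condensed into command form (device numbers and file names are illustrative; "truecrypt1" is whatever slot TrueCrypt assigned):

        # Keep the TrueCrypt volume open, but drop only the filesystem mount:
        umount /media/truecrypt1
        # Image the decrypted virtual block device:
        partclone.ntfs -c -s /dev/mapper/truecrypt1 -o backup.img
        # Optional: compress the image (LZMA2, fast setting):
        7z a -t7z -m0=lzma2 -mx=3 backup.img.7z backup.img
        # Restore later (volume opened in TrueCrypt again, filesystem unmounted):
        partclone.ntfs -r -s backup.img -o /dev/mapper/truecrypt1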

  • Possible capacitor plague - need help identifying

    - by cornjuliox
    I've been having some PC power issues lately, and I think I've tracked it down to a bad power supply. Lately, when I'm on my PC it will often restart without warning, displaying "Hypertransport sync flood error occurred last boot." once POST finishes. I've googled the error, but can't come to a definitive conclusion as to what's causing it. I've seen posts suggesting that it might be a power supply issue, but nothing conclusive. Here's what I've done so far: -I haven't installed anything suspect within the last 3 months. -I do overclock just a tiny bit, so I tried raising the voltages a little. That didn't work so I brought both CPU multiplier and voltages all back to their default settings, but that didn't solve the problem either. The problem still occurs. -AV scanned the whole system, nothing suspect. -I suspected that it might be a bad power supply so I cracked that open and found the following: I think it might be cap plague, but I'm not sure. It looks more like glue TBH. Could someone help me figure out what might be wrong with this PC? EDIT: Sometimes, after these restarts, I noticed that the GPU fan doesn't spin up, and the single rear case fan that just happens to be connected to the same molex Y-cable as the GFX card doesn't spin up either. Anything to that? EDIT 2: I do use the system quite heavily, but I don't know how that will factor into this. I often play Diablo 3 and EVE Online at the same time, frequently alt-tabbing between the two. I also have Firefox open in the background, sometimes with several tabs, and if I feel like it, I'll mute the in-game sound and open foobar2000 for better music. Could it be that I'm just pushing this thing too hard? EDIT 3: I also noticed something odd. Right before I experience these restarts, my monitor would suffer from very faint lines of static moving across the screen. The monitor is still very much useable, but it is very annoying. Following the restart it disappears, and then would gradually re-appear over the next few days, and then restarts again. I find it to be very odd. System specs for good measure: Orion 600 W PSU AMD Athlon II X3 440 (overclocked to 3.14 ghZ, raised the CPU multiplier to x13 from x10) MSI G40-775 motherboard 1 GB inno3D GTX 550 ti 4 GB DDR3 RAM 500 GB Samsung SATA HD

  • Finding the underlying cause of Windows 7 account corruption.

    - by Carl Jokl
    I have been having trouble with my Sister's computer which I built. It is running Windows 7 Ultimate x64. The problem is that I have had problems with the accounts becoming corrupted. First problems manifest themselves in the form of Windows saying the profile failed to be loaded properly and a temporary profile. Eventually the account will not allow login at all. An error message along the lines the authentication service failing the login. I have found information about this problem and how to fix it. The problem being that something has corrupted the account profile and backing up and recreating the accounts fixes the problem. I have been able to fix things and get logins working again but over the period of usually about a week it happens again. Bit by bit the accounts corrupt and then it is back to square one. I am frustrated because I don't know what the underlying cause of the problem is i.e. what is causing the accounts to be corrupted in the first place. At the moment I am just treating the symptoms. I was hoping someone who may have more experience with dealing with this problem might be able to help me find the root cause. Some articles suggest that Norton Internet Security is a big culprit of this problem which is installed. I could try uninstalling Norton and see if it helps. The one thing which is different about this computer to any other I have built is that it has a solid state drive. Actually it has both a hard drive and solid state drive. The documents and settings i.e. the Users directory is stored on the hard drive. This was done following an article about moving the user account data onto a separate drive on Windows 7 which I found on the Internet. Moving the User accounts is more of a pain under Windows 7 and this solution involved creating a low level file system link to the folder from the boot drive (Solid State) to the Hard Drive. The idea is that the computer behaves just as if it is accessing the User's folder from the boot drive but actually the data is stored on the hard drive. This may have nothing to do with the cause of the problem but due to the problem being user account corruption it is a possibility I have not been able to rule out. Any help would be appreciated as I would be glad to see the back of this problem.

  • Linux filesystem with inodes stored close together on the disk

    - by pts
    I'd like to make the ls -laR /media/myfs on Linux as fast as possible. I'll have 1 million files on the filesystem, 2TB of total file size, and some directories containing as much as 10000 files. Which filesystem should I use and how should I configure it? As far as I understand, the reason why ls -laR is slow because it has to stat(2) each inode (i.e. 1 million stat(2)s), and since inodes are distributed randomly on the disk, each stat(2) needs one disk seek. Here are some solutions I had in mind, none of which I am satisfied with: Create the filesystem on an SSD, because the seek operations on SSDs are fast. This wouldn't work, because a 2TB SSD doesn't exist, or it's prohibitively expensive. Create a filesystem which spans on two block devices: an SSD and a disk; the disk contains file data, and the SSD contains all the metadata (including directory entries, inodes and POSIX extended attributes). Is there a filesystem which supports this? Would it survive a system crash (power outage)? Use find /media/myfs on ext2, ext3 or ext4, instead of ls -laR /media/myfs, because the former can the advantage of the d_type field (see in the getdents(2) man page), so it doesn't have to stat. Unfortunately, this doesn't meet my requirements, because I need all file sizes as well, which find /media/myfs doesn't print. Use a filesystem, such as VFAT, which stores inodes in the directory entries. I'd love this one, but VFAT is not reliable and flexible enough for me, and I don't know of any other filesystem which does that. Do you? Of course, storing inodes in the directory entries wouldn't work for files with a link count more than 1, but that's not a problem since I have only a few dozen such files in my use case. Adjust some settings in /proc or sysctl so that inodes are locked to system memory forever. This would not speed up the first ls -laR /media/myfs, but it would make all subsequent invocations amazingly fast. How can I do this? I don't like this idea, because it doesn't speed up the first invocation, which currently takes 30 minutes. Also I'd like to lock the POSIX extended attributes in memory as well. What do I have to do for that? Use a filesystem which has an online defragmentation tool, which can be instructed to relocate inodes to the the beginning of the block device. Once the relocation is done, I can run dd if=/dev/sdb of=/dev/null bs=1M count=256 to get the beginning of the block device fetched to the kernel in-memory cache without seeking, and then the stat(2) operations would be fast, because they read from the cache. Is there a way to lock those inodes and/or blocks into memory once they have been read? Which filesystem has such a defragmentation tool?
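
    On point 5 (pinning inodes in memory): there is no hard lock, but the closest standard knobs are the VFS cache pressure setting plus a pre-warm pass after boot - a sketch, with the caveat that a value of 0 disables dentry/inode reclaim entirely:

        # Bias the kernel towards keeping dentry/inode caches (default is 100):
        sysctl -w vm.vfs_cache_pressure=1
        # Pre-warm the caches once, so later runs hit memory:
        ls -laR /media/myfs > /dev/null

    This does not help the first run after boot, and it does not necessarily cover POSIX extended attributes, which may live in separate blocks on disk.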

  • How can I get the Terminal raster font to display alt codes in a text editor?

    - by grg-n-sox
    I am working on a project that includes making some ASCII art, except it isn't true ASCII art since I am using a far amount of Windows Alt codes to make it. Anyways, I wanted to make sure that as I am working on it, that it looks exactly how it will in a windows command prompt terminal session. So since command prompt defaults to the Terminal raster font, I figured I would use that. But I quickly noticed that when I use the Terminal typeface in a text editor, it will not render ASCII codes, either at all (as is the case most of the time) or incorrectly. Now, I understand if a font just doesn't support non-ASCII characters, but what I don't get is how the characters do show up correctly in command prompt when they don't in a text editor. I checked the output of the 'chcp' and it was set to 437 by default, which is what I need. Well, either that or 850 but preferably 437 since they got rid of some of the graphics in 437 and replaced them with other Latin characters. Command prompt terminal settings show I am using the Terminal raster font with a 8x12 glyph size. So I try using size 12 in the text editor but no good, even after switching the text encoding to either MS-DOS OEM-US (supposedly an alternative name for CP437) or UTF-8. I just don't get how I am not getting the characters to show up. Also, if it helps, the art I am making is basically modified screen shots from a game I play called Dwarf Fortress that uses characters from the Terminal/Curses typeset, or at least that is how it is reported in the forums by those who make graphics sets to replace the default character set. However, the game doesn't actually use the system's Terminal font. The game's data files includes a bitmap image that is a grid of all the characters the game uses. So it uses this bitmap to render graphics instead of the actual font file. And I basically want to get a text editor to make it so if I type up some ASCII art to look like a screenshot from Dwarf Fortress, that it will actually look like Dwarf Fortress other than the lack of color. Any help?

  • Wireless traffic stops when downloading large files at high speed: packets lost (Linksys WRT120N router)

    - by Torious
    The problem Note: First I'd like to understand WHY this is happening. Ofcourse, a solution would be nice too. :) When downloading a large file over HTTP at high-speeds, my wireless traffic basically stops: I can't open webpages and the download itself pauses. It pauses pretty much immediately after starting it; sometimes at 800 KB, sometimes at a few MB. After some time, the download (and other traffic) resumes, but the problem keeps reoccurring during the same download. The problem does not occur when using a wired connection through the same router (Linskys WRT120N). Also note that the connection is not dropped when this happens. It's just that the traffic stops and I can't browse to web pages, etc. (SYN packets are sent but nothing is received, etc.) Inspection with Wireshark shows that the following happens: Server sends data packets which are acknowledged by client Server sends a packet, but SEQ indicates some packets were lost (6 packets in one occurrence). Server sends a few more packets and client acknowledges these using "selective acknowledgement" Server stops sending data for a while (since the lost packets were not acknowledged or the router stops forwarding them?) Eventually, server does a "retransmission" and traffic resumes as normal. This all seems normal behavior to me when packet loss occurs. It's the consistent packet loss throughout a large, high-speed download that puzzles me. What might cause this? My own idea is the following: My internet is pretty fast (100 mbps), so when starting a large-file download, the router buffers the incoming data (since wireless introduces some slight delay / lower speed, in part due to other networks), but the buffer overflows and the router drops packets to regulate traffic (and because it has no choice). But how could that happen? Doesn't the TCP window size limit the amount of data that can go unacknowledged? So how can the router's buffer overflow if there can only be like 64 KB waiting to be acknowledged? Note: I've disabled TCP window scaling and dynamic window size through netsh options, in an attempt to fix this, but it doesn't seem to matter. Also, Wireshark shows a pattern of the server sending 2 packets (of 1514 bytes) and the client sending an ACK, so does that rule out a possible buffer overflow? And a few more subsequent packets are received... I'm at a loss here. Thanks for any insights. Things that are (probably) NOT the cause / I have experimented with The browser Various TCP options in Windows 7 (netsh etc.) Router settings such as MTU, beacon interval, UPnP, ...

  • Configure vlan on Netgear switch via SNMP

    - by Russell Gallop
    I am trying to configure VLANs on a Netgear GS752TSX from the Linux command line with net-snmp. I have created VLAN 99 on the web interface and now want to control the PVID settings, egress and tagging. I have identified these as the MIB objects I need to change:

        dot1qPvid.<port>
        dot1qVlanStaticEgressPorts.99
        dot1qVlanStaticUntaggedPorts.99

    PVID works as I expect:

        $ snmpset -r 1 -t 20 -v 2c -c private <switch> dot1qPvid.17 u 99
        Q-BRIDGE-MIB::dot1qPvid.17 = Gauge32: 99
        $ snmpget -r 1 -t 20 -v 2c -c private <switch> dot1qPvid.17
        Q-BRIDGE-MIB::dot1qPvid.17 = Gauge32: 99

    and so do the egress ports:

        $ snmpset -r 1 -t 20 -v 2c -c private <switch> dot1qVlanStaticEgressPorts.99 x 'ff ff ff ff ff ff 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00'
        Q-BRIDGE-MIB::dot1qVlanStaticEgressPorts.99 = Hex-STRING: FF FF FF FF FF FF 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        $ snmpget -r 1 -t 20 -v 2c -c private <switch> dot1qVlanStaticEgressPorts.99
        Q-BRIDGE-MIB::dot1qVlanStaticEgressPorts.99 = Hex-STRING: FF FF FF FF FF FF 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

    But untagging the ports doesn't seem to remember my setting:

        $ snmpset -r 1 -t 20 -v 2c -c private <switch> dot1qVlanStaticUntaggedPorts.99 x 'ff ff ff ff ff ff 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00'
        Q-BRIDGE-MIB::dot1qVlanStaticUntaggedPorts.99 = Hex-STRING: FF FF FF FF FF FF 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        $ snmpget -r 1 -t 20 -v 2c -c private <switch> dot1qVlanStaticUntaggedPorts.99
        Q-BRIDGE-MIB::dot1qVlanStaticUntaggedPorts.99 = Hex-STRING: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

    I have tried net-snmp 5.4.1 and 5.7.2. Is there something I'm doing wrong?
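
    One thing that may be worth trying (untested on this particular switch): some Q-BRIDGE agents only accept dot1qVlanStaticUntaggedPorts when it arrives in the same SET as the egress list, so both objects can be written in a single snmpset:

        # Sketch: send the egress and untagged masks in one SET PDU.
        PORTS='ff ff ff ff ff ff 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00'
        snmpset -r 1 -t 20 -v 2c -c private <switch> \
            dot1qVlanStaticEgressPorts.99 x "$PORTS" \
            dot1qVlanStaticUntaggedPorts.99 x "$PORTS"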

  • Mercurial browser on Windows 2003 takes several refreshes before displaying repositories

    - by Tim Murphy
    When attempting to browse my Mercurial repositories, it usually takes several refreshes before the repository list is displayed. The configuration is as follows:

        Windows Server 2003 (dedicated machine hosted by http://www.server4you.com/; the site has anonymous password protection with self-signed SSL)
        Mercurial 1.5.3
        Python 2.6.5
        Python for Windows 32 extensions 214 py2.6
        isapi-wsgi 0.4.2

    The repositories are being served via ISAPI using the standard hgwebdir_wsgi.py file (copy below). Other problems with the repository server:

        Before doing a clone/push/etc. I have to browse the repositories first, otherwise hg on my local machine cannot locate the site.
        I have one repository with a large changeset that, after a minute or so, throws the error "abort: error: An existing connection was forcibly closed by the remote host". I will be asking another question for this problem.

    What can I do to start tracking down this problem?

    hgwebdir_wsgi.py:

        # Configuration file location
        hgweb_config = r'C:\Public\Mercurial\WebSite\hgweb.config'

        # Global settings for IIS path translation
        path_strip = 0   # Strip this many path elements off (when using url rewrite)
        path_prefix = 0  # This many path elements are prefixes (depends on the
                         # virtual path of the IIS application).

        import sys

        # Adjust python path if this is not a system-wide install
        #sys.path.insert(0, r'c:\path\to\python\lib')

        # Enable tracing. Run 'python -m win32traceutil' to debug
        if hasattr(sys, 'isapidllhandle'):
            import win32traceutil

        # To serve pages in local charset instead of UTF-8, remove the two lines below
        import os
        os.environ['HGENCODING'] = 'UTF-8'

        import isapi_wsgi
        from mercurial import demandimport; demandimport.enable()
        from mercurial.hgweb.hgwebdir_mod import hgwebdir

        # Example tweak: Replace isapi_wsgi's handler to provide better error message
        # Other stuff could also be done here, like logging errors etc.
        class WsgiHandler(isapi_wsgi.IsapiWsgiHandler):
            error_status = '500 Internal Server Error'  # less silly error message

        isapi_wsgi.IsapiWsgiHandler = WsgiHandler

        # Only create the hgwebdir instance once
        application = hgwebdir(hgweb_config)

        def handler(environ, start_response):
            # Translate IIS's weird URLs
            url = environ['SCRIPT_NAME'] + environ['PATH_INFO']
            paths = url[1:].split('/')[path_strip:]
            script_name = '/' + '/'.join(paths[:path_prefix])
            path_info = '/'.join(paths[path_prefix:])
            if path_info:
                path_info = '/' + path_info
            environ['SCRIPT_NAME'] = script_name
            environ['PATH_INFO'] = path_info
            return application(environ, start_response)

        def __ExtensionFactory__():
            return isapi_wsgi.ISAPISimpleHandler(handler)

        if __name__ == '__main__':
            from isapi.install import *
            params = ISAPIParameters()
            HandleCommandLine(params)

    hgweb.config:

        [paths]
        / = C:\Public\Mercurial\Repositories\*

        [web]
        allow_archive = bz2 gz zip  ; Allows archive downloads.
        allow_push = ########       ; Users that are allowed to push.

  • Samba share not accessible from Win 7 - tried advice on superuser

    - by Roy Grubb
    I have an old Red Hat Linux box that I use, amongst other things, to run Samba. My Vista and remaining Win XP PC can access the p/w-protected Samba shares. I just set up a new Windows 7 64-bit Pro PC. Attempts to access the Samba shares by clicking on the Linux box's icon in 'Network' from this machine gave a Logon failure: unknown user name or bad password. message when I gave the correct credentials. So I followed the suggestions in Windows 7, connecting to Samba shares (also checked here but found LmCompatibilityLevel was already 1). This got me a little further. If click on the Linux box's icon in 'Network' from this machine I now see icons for the shared directories. But when I click on one of these, I get \\LX\share is not accessible. You might not have permission... etc. I tried making the Win 7 password the same as my Samba p/w (the user name was already the same). Same result. The Linux box does part of what I need for ecommerce - the in-house part, it's not accessible to the Internet. As my Linux Fu is weak, I have to avoid changes to the Linux box, so I'm hoping someone can tell me what to do to Win 7 to make it behave like XP and Vista when accessing this share. Help please!? Thanks Thanks for replying @Randolph. I had set 'Network security: LAN Manager authentication level' to Send LM & NTLM - use NTLMv2 session security if negotiated based on the advice in Windows 7, connecting to Samba shares and had restarted the machine, but that didn't work for me. I'll try playing with other Network security values. I have now tried the following: Network security: Allow Local System to use computer identity for NTLM: changed from Not Defined to "Enabled". Restarted machine Still says "\LX\share is not accessible. You might not have permission..." etc. Network security: Restrict NTLM: Add remote server exceptions for NTLM Authentication (added LX) Restarted machine Still says "\LX\share is not accessible. You might not have permission..." etc. I can't see any other Network security settings that might affect this. Any other ideas please? Thanks Roy
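
    Since XP and Vista still work, it may come down to what the old Samba build is able to negotiate with Windows 7's defaults. A few read-only checks on the Red Hat box (no configuration changes; the log path is a typical default and may differ):

        smbd -V                            # very old Samba releases handle NTLMv2 poorly
        testparm -s                        # dump the effective smb.conf
        smbclient -L //LX -U user          # does the same account work locally on the server?
        tail -f /var/log/samba/log.smbd    # watch for auth errors while the Win 7 machine connects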

  • pam_unix(sshd:session): session opened for user NOT ROOT by (uid=0), then closes immediately when using TortoiseSVN

    - by codewaggle
    I'm having problems accessing an SVN repository using TortoiseSVN 1.7.8. The SVN repository is on a CentOS 6.3 box and appears to be functioning correctly. # svnadmin --version # svnadmin, version 1.6.11 (r934486) I can access the repository from another CentOS box with this command: svn list svn+ssh://[email protected]/var/svn/joetest But when I attempt to browse the repository using TortiseSVN from a Win 7 workstation I'm unable to do so using the following path: svn+ssh://[email protected]/var/svn/joetest I'm able to login via SSH from the workstation using Putty. The results are the same if I attempt access as root. I've given ownership of the repository to USER:USER and ran chmod 2700 -R /var/svn/. Because I can access the repository via ssh from another Linux box, permissions don't appear to be the problem. When I watch the log file using tail -fn 2000 /var/log/secure, I see the following each time TortiseSVN asks for the password: Sep 26 17:34:31 dev sshd[30361]: Accepted password for USER from xx.xxx.xx.xxx port 59101 ssh2 Sep 26 17:34:31 dev sshd[30361]: pam_unix(sshd:session): session opened for user USER by (uid=0) Sep 26 17:34:31 dev sshd[30361]: pam_unix(sshd:session): session closed for user USER I'm actually able to login, but the session is then closed immediately. It caught my eye that the session is being opened for USER by root (uid=0), which may be correct, but I'll mention it in case it has something to do with the problem. I looked into modifying the svnserve.conf, but as far as I can tell, it's not used when accessing the repository via svn+ssh, a private svnserve instance is created for each log in via this method. From the manual: There's still a third way to invoke svnserve, and that's in “tunnel mode”, with the -t option. This mode assumes that a remote-service program such as RSH or SSH has successfully authenticated a user and is now invoking a private svnserve process as that user. The svnserve program behaves normally (communicating via stdin and stdout), and assumes that the traffic is being automatically redirected over some sort of tunnel back to the client. When svnserve is invoked by a tunnel agent like this, be sure that the authenticated user has full read and write access to the repository database files. (See Servers and Permissions: A Word of Warning.) It's essentially the same as a local user accessing the repository via file:/// URLs. The only non-default settings in sshd_config are: Protocol 2 # to disable Protocol 1 SyslogFacility AUTHPRIV ChallengeResponseAuthentication no GSSAPIAuthentication yes GSSAPICleanupCredentials yes UsePAM yes AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE AcceptEnv XMODIFIERS X11Forwarding no Subsystem sftp /usr/libexec/openssh/sftp-server Any thoughts?
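
    One way to narrow this down is to reproduce the tunnel by hand from a machine where SVN already works, since svn+ssh is essentially "ssh to the host and run svnserve -t":

        # Sketch: run the tunnel command manually (same user/host as above).
        ssh [email protected] svnserve -t
        # A healthy tunnel prints an svn greeting such as:  ( success ( 2 2 ( ) ( ... ) ) )

    If that greeting appears, sshd and svnserve are fine, and the place to look is how TortoiseSVN invokes its tunnel on the Windows side (the SSH client configured under Settings -> Network, typically TortoisePlink) rather than the server.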

  • Kernel-mode Authentication: 401 errors when accessing site from remote machines

    - by CJM
    I have several Classic ASP sites that use Integrated Windows Authentication and Kerberos delegation. They work OK on the live servers (recently moved to a Server 2008/IIS7 servers), but do not work fully on my development PC or my development server. The IIS on both machines were configured through an IIS web deployment tool package which was exported from an old machine; the deployment didn't work perfectly, and I had to tinker a bit to get the sites working. When accessing the apps locally on either machine, they work fine; when accessing from another machine, the user is prompted by a username/password dialog, and regardless of what you enter, ultimately it results in a 401 (Unauthorised) error. I've tried comparing the configuration of these machines against similar live servers (that all work fine), and they seem generally comparable (given that none of the live servers are yet on IIS7.5 (Windows 7/Server 2008 R2). These applications run in a common application pool which uses a special domain user as it's identity - this user has similar permissions on the live and development machines. On IIS6 platforms, to enable kerberos delegation, I needed to set up some SPNs for this user, and they are still in place (even though I don't believe they are needed any longer for IIS7+ due to kernel-mode authentication), Furthermore, this account is enabled for Kerberos delegation in Active Directory, as is each machine I am dealing with. I'm considering the possibility that the deployment might have made changes/failed to make changes to the IIS configuration thus causing this problem. Perhaps a complete rebuild (minus another web deployment attempt) would solve the problem, but I'd rather fix (thus understand) the current problem. Any ideas so far? I've just had another attempt at fixing this issue, and I've made some progress, but I don't have a complete fix...yet. I've discovered that if I access the sites via IP address (than via NetBIOS name), I get the same dialog, except that it accepts my credentials and thus the application works - not quite a fix, but a useful step. More interestingly, I discovered that if I disable Kernel-mode authentication (in IIS Manager Website Authentication Advanced Settings), the applications work perfectly. My foggy understanding is that this is effectively working in the pre-IIS7 way. A reasonable short-term solution, but consider the following explicit advice from IIS on this issue: By default, IIS enables kernel-mode authentication, which may improve authentication performance and prevent authentication problems with application pools configured to use a custom identity. As a best practice, do not disable this setting if Kerberos authentication is used in your environment and the application pool is configured to use a custom identity. Clearly, this is not the way my applications should be working. So what is the issue?

  • Port forwarding using a BT Home Hub 2.0 (Supplied to new BT Infinity Customers in the UK)

    - by Jasarien
    I don't usually have trouble with port forwarding, I've been able to do it successfully on a number of different routers, including Linksys, Belkin, Netgear and Apple (Time Capsule / Airport Extreme). So I'm quite confused here. I had been using my Apple Time Capsule as my router for a few years now, with several port mappings all working fine. But it died recently, so I've had to resort to using the BT Home Hub 2.0 that was supplied with my BT Infinity broadband subscription. The forwarding interface for the Home Hub is simplified for the most part, allowing you to select an application or game and assign it to a particular computer on the network which you choose from a list that the Home Hub has 'discovered'. My Mac Pro has a manually assigned static IP 192.168.1.4 and my router is static at 192.168.1. I have chosen SSH from the list of applications and assigned it to my Mac Pro (the only computer in the list currently). The Home Hub also has a feature to keep a DNS service updated, and I have set it to keep my external IP address updated on my hostname. This is how I had it setup in the past with other routers and not had trouble before. I am able to ping my hostname (and external IP) from outside the network and get a response. But when I try to connect using SSH, the connection times out. The Home Hub also has "Firewall settings". The currently selected setting is: Default: Allow all outgoing connections and block all incoming traffic. Games and application sharing is allowed. But I've tried changing it to: Disabled: All traffic is allowed to pass through your BT Home Hub to your devices. Note: you’ll still need to use the games and application sharing feature to make sure that certain applications work properly. And the connection still times out... So frustrating. The OS X firewall on my Mac is disabled, so I don't think that's in the way. I have tried setting the port forwarding manually, instead of relying on the preset "SSH" option (incase it's not using the port I expect). So I set up my own "application" (as the Home Hub calls it) and forwarded external port 22 TCP to internal port 22 TCP to 192.168.1.4 - but that just gives the same result - unable to connect. Next, with the router's firewall disabled and OS X's firewall disabled, I ran the Shields Up test (https://www.grc.com/x/ne.dll?bh0bkyd2) and the result was that all my service ports (0 - 1055) are in 'Stealth' mode. I.e. nothing even exists at my IP as far as any outsider is concerned... Strange. The only thing that seems to work is setting my Mac Pro as the DMZ - which I don't want to do for obvious reasons. Any help with this would be extremely appreciated, thanks.
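
    It may help to split the problem in half before touching the Hub again: confirm the Mac is reachable on port 22 from inside the LAN, then test the forward from outside (the hostname below is a stand-in for your dynamic-DNS name):

        # On the Mac Pro - is sshd listening at all?
        sudo lsof -nP -iTCP:22 -sTCP:LISTEN
        # From another machine on the LAN - can it reach 192.168.1.4 directly?
        nc -vz 192.168.1.4 22
        # From outside the network (e.g. a tethered phone):
        nc -vz your-host.example.org 22

    If the LAN test works but the external one fails, the Hub (or the ISP) is dropping the traffic; if even the LAN test fails, the router was never the problem.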

  • How do I troubleshoot an IPsec tunnel (from a cellular router to a public server)?

    - by Hanno Fietz
    I'm new to IPsec and struggling with a setup that might soon be widely used in our operations (provided I do understand it, eventually...). A cellular router (blackbox by netModule, from its log messages it seems to be running Linux and OpenSwan) connects a sensor network on customers' sites with our public server. We need to be able to connect into the local network, so I had the cell provider give me a public IP (a dynamic one). The way their setup works, the public IPs only allow IPsec traffic. I set up OpenSwan on our Ubuntu server (running Jaunty). This is my connection config from /etc/ipsec.conf: conn gprs-field-devices left=my.pub.lic.ip [email protected] #leftsubnet=192.168.1.129/25 right=%any [email protected] #rightsubnet=192.168.1.1/25 #rightnexthop=%defaultroute auto=add On the router, all I have is the Web UI, in which I made the following settings: "Remote endpoint": public IP of server, same as "left" above "Local Network Address": 192.168.1.1 "Local Network Mask": 255.255.255.128 "Remote Network Address": 192.168.1.129 "Remote Network Mask": 255.255.255.128 The pluto process on the server is listening for connections on port 500. It can't open a tunnel, obviously, because it doesn't know at which IP the client is. I set up a passphrase as PSK for @field.econemon.com in /etc/ipsec.secrets and also configured it in the router (which doesn't seem to support certificates). My problem is, nothing happens. The router just says, IPsec is "down". When I copy-paste the IP into ipsec.conf (for "right="), and ask the server to ipsec auto --up gprs-field-devices, it just hangs until I press Ctrl-C. Is there anything wrong with my setup? How can I debug this further? My router gives the following loglines that seem related, but don't tell me anything: Feb 21 23:08:20 Netbox authpriv.warn pluto[2497]: loading secrets from "/etc/ipsec.secrets" Feb 21 23:08:20 Netbox authpriv.warn pluto[2497]: loading secrets from "/etc/ipsec.d/hostkey.secrets" Feb 21 23:08:20 Netbox authpriv.warn pluto[2497]: loading secrets from "/etc/ipsec.d/netbox0.secrets" Feb 21 23:08:20 Netbox authpriv.warn pluto[2497]: "netbox00" #1: initiating Main Mode Feb 21 23:08:20 Netbox daemon.err ipsec__plutorun: 104 "netbox00" #1: STATE_MAIN_I1: initiate Feb 21 23:08:20 Netbox daemon.err ipsec__plutorun: ...could not start conn "netbox00" Feb 21 23:08:22 Netbox authpriv.warn pluto[2497]: packet from 188.40.57.4:500: ignoring informational payload, type NO_PROPOSAL_CHOSEN Feb 21 23:08:22 Netbox authpriv.warn pluto[2497]: packet from 188.40.57.4:500: received and ignored informational message Feb 21 23:08:28 Netbox user.warn parrot.system_controller[762]: IPSECCTRLR: Tunnel 0 is down for 0 seconds Feb 21 23:08:40 Netbox user.warn parrot.system_controller[762]: IPSECCTRLR: Tunnel 0 is down for 10 seconds Feb 21 23:08:52 Netbox authpriv.warn pluto[2497]: packet from 188.40.57.4:500: ignoring informational payload, type NO_PROPOSAL_CHOSEN
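
    A few things that usually move this kind of problem forward on the Openswan side (the connection name matches the config above; the interface name is an assumption):

        ipsec verify                          # sanity-check the install (rp_filter, NAT-T, etc.)
        ipsec auto --status | grep gprs-field-devices
        tcpdump -ni eth0 'udp port 500 or udp port 4500 or esp'

    The NO_PROPOSAL_CHOSEN in the router's log normally means the two ends disagree on the IKE (phase 1) proposals, so explicitly pinning ike= and phase2alg= in the conn to whatever the netModule box offers is a reasonable next experiment. Also, with right=%any the server can only respond, never initiate, which is why "ipsec auto --up" from the server side just hangs.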

  • Scientific Linux - mysql and apache fail to start on reboot

    - by Derek Deed
    Both mysqld and httpd fail to restart following a reboot of the server, although chkconfig --list shows both daemons set to on for run levels 2, 3, 4 and 5. All control is being executed via Webmin.

    After rebooting the server, MySQL and Apache are not running. Webmin reports:

        MySQL Database Server (MySQL version 5.1.69)
        MySQL is not running on your system - database list could not be retrieved.
        Click this button to start the MySQL database server on your system with the
        command /etc/rc.d/init.d/mysqld start. This Webmin module cannot administer
        the database until it is started.

        Apache Webserver (Apache version 2.2.15)
        Default Server - Defines the default settings for all other virtual servers, and
        processes any unhandled requests. Address Any, Port Any, Server Name Automatic,
        Document Root /var/www/drupal
        Virtual Server - Processes all requests on port 443 not handled by other virtual
        servers. Address Any, Port 443, Server Name Automatic, Document Root /var/www/drupal

        chkconfig --list mysqld
        mysqld   0:off 1:off 2:on 3:on 4:on 5:on 6:off
        chkconfig --list httpd
        httpd    0:off 1:off 2:on 3:on 4:on 5:on 6:off

    Manually restart Apache:

        chkconfig --list httpd
        httpd    0:off 1:off 2:on 3:on 4:on 5:on 6:off

    Manually restart MySQL:

        chkconfig --list mysqld
        mysqld   0:off 1:off 2:on 3:on 4:on 5:on 6:off

    Everything is now running okay, but there is no difference in the chkconfig outputs above. I tried:

        chkconfig --levels 235 httpd on
        /etc/init.d/httpd start

    and the same for mysqld, but no change in operation. Log files show that the shutdown was completed successfully, but there is no indication of the service restarting until it is executed manually:

        131112 13:59:15 InnoDB: Starting shutdown...
        131112 13:59:16 InnoDB: Shutdown completed; log sequence number 0 881747021
        131112 13:59:16 [Note] /usr/libexec/mysqld: Shutdown complete
        131112 13:59:16 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
        131112 14:09:52 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
        131112 14:09:52 InnoDB: Initializing buffer pool, size = 8.0M
        131112 14:09:52 InnoDB: Completed initialization of buffer pool

    And the Apache logs:

        [Tue Nov 12 13:59:13 2013] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
        [Tue Nov 12 13:59:13 2013] [notice] Digest: generating secret for digest authentication ...
        [Tue Nov 12 13:59:13 2013] [notice] Digest: done
        [Tue Nov 12 13:59:14 2013] [notice] Apache/2.2.15 (Unix) DAV/2 PHP/5.3.3 mod_ssl/2.2.15 OpenSSL/1.0.0-fips configured -- resuming normal operations
        [Tue Nov 12 13:59:14 2013] [notice] caught SIGTERM, shutting down
        [Tue Nov 12 14:27:13 2013] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
        [Tue Nov 12 14:27:13 2013] [notice] Digest: generating secret for digest authentication ...
        [Tue Nov 12 14:27:13 2013] [notice] Digest: done
        [Tue Nov 12 14:27:13 2013] [notice] Apache/2.2.15 (Unix) DAV/2 PHP/5.3.3 mod_ssl/2.2.15 OpenSSL/1.0.0-fips configured -- resuming normal operations

    Is anyone able to shed any light on this problem?
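
    chkconfig only manages the S/K symlinks; whether init actually walks them depends on the runlevel the machine boots into, so a few read-only checks may narrow this down:

        runlevel                                   # which runlevel did the boot end up in?
        grep initdefault /etc/inittab              # default runlevel on RHEL/SL 6
        ls -l /etc/rc3.d/ /etc/rc5.d/ | grep -Ei 'mysqld|httpd'   # are the S-links really there?

    If the S-links exist in the runlevel actually reached, the next suspect is the init scripts failing early during boot (for example before a network or filesystem dependency is up), which /var/log/boot.log should show.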

  • How to recover a MySQL InnoDB table?

    - by Kau-Boy
    When I try to launch the Plesk administration page of you server I get the following error: ERROR: PleskMainDBException MySQL query failed: MySQL server has gone away The MySQL Server is working well. Although it seems that the plesk database is somehow corrupt and any action on this database results in a restart of the mysql process, so even queries to other databases on the same MySQL server will be lost. If I try to connect to the plesk database using phpMyAdmin, I can only see the number of tables, the database had originally. But I am not able to open the tables listing. As soon as I try it, the mysql process crashes again. Trying to connect to the database using ssh works. I can even run a SELECT statement against any table an get a result. I don't know if it is an plesk error or an error of the psa database or even the MySQL server. Can you give me any tips on how to recover the plesk system. Should I try to repair the Plesk installation before. And if so, how can I do it and will all my settings get lost doing it? EDIT: Trying to dump the database, I get the following error: mysqldump: Got error: 2013: Lost connection to MySQL server during query when using LOCK TABLES EDIT: I could find out, that the table 'data_bases' is responsible for the crash of the MySQL server process. But trying to repair it using a REPAIR TABLE statement doesn't work. EDIT: I now dropped the whole database and restored it from a dump. But why I try to recover the data_bases table I get the following error: ERROR 1005 (HY000) at line 24: Can't create table './psa/data_bases.frm' (errno: 121) I am not able to create the table again. Somewhere in the MySQL system there is still some information about this table. I tested the same thing locally. If I just delete the table files and then try to create the table again I get the same error. If I drop the table through MySQL, I can create the table again afterwards. But trying to drop the table using MySQL the whole MySQL system crashed. Is there any way to solve that issue?
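
    Not part of the original report, but the standard way to get a dump out of a damaged InnoDB database (so the psa schema can be rebuilt cleanly) is InnoDB's forced-recovery mode - a sketch, assuming a stock /etc/my.cnf layout and your usual MySQL credentials:

        # 1. Add under [mysqld] in /etc/my.cnf (start low, raise only if needed, maximum 6):
        #        innodb_force_recovery = 4
        # 2. Restart MySQL and dump the affected database:
        /etc/init.d/mysqld restart
        mysqldump psa > psa-backup.sql
        # 3. Remove the setting again, restart, and rebuild/reload from the dump.

    The later errno 121 when re-creating data_bases is typically an orphaned entry in the InnoDB data dictionary left behind when .frm/.ibd files are removed by hand, which is another reason to go through a dump-and-reload rather than manipulating table files directly.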

  • Cygwin's RSYNC for large data transfer

    - by Tim Brigham
    I'm using rsync from Cygwin to do a large scale data transfer from an aging HP MSA 1000 to a new DAS attached to a different server. I have a daemon running on the remote server in read only mode and a local copy writing the files to disk.

    One of my servers is an image repository with over a million files spread across about 300 directories. Each file averages only a couple hundred kilobytes. More so than any other box this one is proving problematic.

    The rsync process will work for a while - sometimes 20 minutes, sometimes an hour - and then it simply quits and sits idle at a given file name. I have verified that the file isn't corrupt on the remote server and that the file is successfully created on the local drive. I ran the rsync client in -vv mode, which returns nothing. I checked out the logs created by the daemon. I looked at the network utilization on the interface, which is sitting idle. I looked at the AV settings to see if anything could pose a problem there. I even updated to the latest release of Cygwin. What do I need to do in order to keep this connection up?

    EDIT: The client system is using the command:

        rsync.exe server::Drives/f/Repo/ /cygdrive/T/Repo --archive -P -vv

    The server is using the command:

        rsync.exe --daemon --no-detach --config "rsyncd.conf"

    The contents of rsyncd.conf:

        use chroot = false
        strict modes = false
        hosts allow = 192.168.100.9
        log file = c:/rsyncd.log
        uid=0
        gid=0

        [Drives]
        path = /cygdrive
        read only = yes

    EDIT: The file server is 2003, the disk type on the array is GPT and the size of the array is about 4 TB.

    EDIT: Stranger.. It looks like the process is reliably erroring out at about 175,000 files. Rsync runs fine when I pick the same directory it has problems with one at a time.

    EDIT:

        rsync version 3.0.9 protocol version 30
        Copyright (C) 1996-2011 by Andrew Tridgell, Wayne Davison, and others.
        Web site: http://rsync.samba.org/
        Capabilities:
            64-bit files, 64-bit inums, 32-bit timestamps, 64-bit long ints,
            no socketpairs, hardlinks, symlinks, IPv6, batchfiles, inplace,
            append, ACLs, xattrs, iconv, symtimes

    A similar failure occurred when going from the same set of files with Cygwin to a Linux install. It didn't happen until several hours later than normal however.
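
    Two standard rsync options may at least turn the silent stall into a visible error, and splitting the run per directory limits how much a single stall costs (paths match the client command above; the directory loop assumes no spaces in the top-level names):

        # Fail loudly instead of sitting idle:
        rsync --archive -P -vv --timeout=120 --contimeout=60 \
              server::Drives/f/Repo/ /cygdrive/T/Repo
        # Or walk the ~300 top-level directories one at a time:
        rsync server::Drives/f/Repo/ | awk '$1 ~ /^d/ && $NF != "." {print $NF}' > dirs.txt
        while read -r d; do
            rsync --archive -P --timeout=120 "server::Drives/f/Repo/$d/" "/cygdrive/T/Repo/$d/"
        done < dirs.txt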

  • Running an rsync sweep before initializing lsyncd for synchronizing instances on EC2

    - by chrisallenlane
    My company uses several EC2 servers that will scale up and down according to the load we're receiving on our sites at any given moment. For the sake of our discussion here, we're running four instances: master.ourdomain.com - the file syncing "hub" of the webservers www1/www2/www3.ourdomain.com - three webservers which turn on or off as dictated by load I'm using lsyncd to keep all of the webservers in sync, and for the most part, it's working quite well. We're using a two-way syncing scheme, such that each webserver syncs against master, and master syncs against each webserver. Thus, the webservers are kept in sync, even though they aren't syncing against each other directly. I'm having one problem that I'm having a hard time solving,though. It occurs under these circumstances: When changes are made on master (perhaps after we've pushed new code), while some of the redundant webservers are sleeping And then a sleeping webserver wakes-up to absorb load Under that circumstance, I would like the following to happen: First, the newly-awoken webserver should sync its file structure - one way - against master, to bring its web application code up-to-date. Then, and only then, should it begin pushing changes in its file structure back to master. Unfortunately, currently, when a sleeping server is started, when lsyncd starts up, it pushes changes back to master before updating its own codebase, thus overwriting new code with old. Thus, before lsyncd starts, I'd like to be able to synchronize the webservers code against master's, perhaps by running a simple one-way rsync against the two machines. We're running lsyncd v.2, and I've tried to make this happen by using the "bash" configuration options documented in the lsyncd manual. My configuration file looks like this: settings = { logfile = "/home/user/log/lsyncd/log.txt", statusFile = "/home/user/log/lsyncd/status.txt", maxProcesses = 2, nodaemon = false, } bash = { onStartup = "rsync [email protected]:/home/user/www /home/user/www" } sync{ default.rsyncssh, source="/home/user/www/", host="[email protected]", targetdir="/home/user/www/", rsyncOpts="-ltus", excludeFrom="/home/user/conf/lsyncd/exclude" } (I've obviously redacted that file somewhat to protect the identities of the guilty.) Simply put, though, this just isn't working. How else might I approach this problem? I was looking at the --delete-after option in man rsync, but I don't think that does what I'm looking for. Are there any suggestions about how I should approach this problem? Thanks for lending your time and expertise. Chris
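
    Since lsyncd's own onStartup hook isn't cooperating, one straightforward alternative is not to start lsyncd directly, but through a small wrapper that does the one-way pull first (the host, paths and rsync options below mirror the config above; the config-file path is a placeholder):

        #!/bin/bash
        # Sketch: sync down from master first (master wins), then start lsyncd.
        set -e
        rsync -ltus --delete --exclude-from=/home/user/conf/lsyncd/exclude \
              [email protected]:/home/user/www/ /home/user/www/
        exec lsyncd /home/user/conf/lsyncd/lsyncd.lua

    With --delete, the webserver's stale tree is made identical to master before any change events can flow back the other way.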

  • DVD playback with Windows Media Player 11 works fine, but when copied to HDD and played back, the audio stutters

    - by stakx
    I have several DVDs with short documentaries on it. Since the notebook I'm using (a Dell Latitude E6400) has only one DVD drive, and I might play back those short movies very often, I thought of copying them to the HDD and playing them back from there. However, I've run into a problem, namely stuttering audio. Problem description: When I play back these movies directly from DVD (with Windows Media Player 11 under Windows Vista), everything works fine. Smooth video, no significant audio problems (only the occasional click). But as soon as I copy any of these DVDs to the HDD and try to play them back from there (e.g. using the wmpdvd://drive/title/chapter?contentdir=path protocol, I get stuttering audio — audio playback sounds like a machine gun for a third of a second or so, approx. every 8 seconds. I have tried converting the VOB files from the DVD to another format (ie. ripping), but that resulted in a noticeable downgrade of picture quality. Therefore I thought it best to keep the files in their original format, if possible. Still, I suspect that the stuttering audio is due to some (de-)muxing problem, and that changing the file format might help. (After all, video playback is fine; therefore I don't think that the hardware is too slow for playback.) Only thing is, I don't know how to convert the VOB files to another Windows Media Player-compatible format without quality loss. I hope someone can help me, or give me further pointers on things I could try out to get HDD playback to work without the problem described. Some things I've tried so far, without any success: VOB2MPG, in order to convert the .vob file to a .mpg file. But that changes only the A/V container, not the content. No re-encoding takes place at all. Re-encoding with MPlayer/MEncoder. Lots of quality loss there, and I frankly haven't got the time to test all possible settings combinations available. Disabling all plug-ins, equalizers, etc. in Windows Media Player. Disabling all hardware acceleration on the audio playback device. Further info on the VOB files I'm trying to playback: The video format is MPEG ES, PAL 720x576 pixels @ 24/25 frames per second. The sound stream is uncompressed PCM, 16-bit stereo @ 48kHz. (Might it help if I somehow re-encoded the sound stream at a lower resolution, or as an MP3? If so, how would I do this without changing the video stream?) P.S.: I am limited to using Windows Media Player (11). (I previously tried MPlayer btw., but the video playback quality was surprisingly bad.)
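
    On the question of re-encoding only the sound: this isn't a Windows Media Player feature, but a free tool such as ffmpeg can rewrap a VOB with the video untouched and only the LPCM track converted to MP2, the usual audio codec for MPEG program streams (file name and bitrate below are examples):

        ffmpeg -i VTS_01_1.VOB -vcodec copy -acodec mp2 -ab 224k out.mpg

    Whether that cures the stutter would at least confirm or rule out the theory that the uncompressed PCM stream is what chokes playback from the HDD.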

    Read the article

  • IIS httpTracing setting has no effect

    - by digahill
    I'm trying to troubleshoot some performance issues we are having on a specific ASP.NET page with Microsoft's Perfecto tool on IIS 7.5. Perfecto uses the ETW hooks built into IIS to report on specific HTTP requests, and is working quite well. However, I only want IIS to emit traces for one specific page, say "Default.aspx" in my TestApp web application. Following the instructions on the httpTracing reference page, I should be able to add the traceUrls element to the root web.config file for TestApp. Doing so doesn't seem to affect tracing at all. For example, I've used the following settings in the web.config file (in the system.webServer section), and every request that hits the IIS server still sends tracing messages that are in turn picked up by Perfecto:
        <httpTracing>
          <traceUrls>
            <add value="/Default.aspx" />
          </traceUrls>
        </httpTracing>
    I then found that the applicationHost.config file on the server had an empty <httpTracing /> element. I tried removing this element, as well as the httpTracing element in the web.config. After a machine reboot, I was still getting tracing messages! My understanding is that the presence of the httpTracing element is what controls whether ETW tracing is on or not. I ensured there was no reference to httpTracing in the machine.config either. At a loss, I decided to remove the IIS Tracing feature with Server Manager. After a reboot, I no longer got ETW tracing. I then reinstalled the IIS Tracing feature with Server Manager. As expected, the httpTracing element reappeared in applicationHost.config, and tracing messages began being sent again for all sites and pages. I then tried to use the traceUrls element at the applicationHost.config level. This also didn't filter out any traces. I must be misunderstanding something key about how httpTracing works, and there aren't many resources on the web to help me. Can anyone tell me whether what I'm trying should work? Has anyone else had success filtering tracing messages per page with traceUrls? I should note that I also tried changing the following setting in applicationHost.config to "Allow", but it didn't seem to help:
        <section name="httpTracing" overrideModeDefault="Allow" />
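    For completeness, the appcmd equivalent of the web.config snippet above would presumably be something like the following (a sketch only - the site/application path is an example, and I haven't confirmed the exact collection syntax against the httpTracing schema):
        appcmd set config "Default Web Site/TestApp" -section:system.webServer/httpTracing /+"traceUrls.[value='/Default.aspx']" /commit:apphost
    If scripting it this way commits the setting somewhere different from where I've been editing, that might explain part of what I'm seeing.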

    Read the article

  • Postfix to deliver mail to a virtual address mailbox

    - by Chloe
    Postfix version 2.6.6, Dovecot version 2.0.9. I want to set up Postfix + Dovecot. Dovecot seems to be working: I can authenticate. However, the mailbox is empty - nothing gets delivered! I followed many tutorials on Postfix + Dovecot, but they seem to complicate things by using the Dovecot LDA or MySQL. I just want to keep it very simple; having Postfix deliver to the virtual mailboxes is fine, and I don't need MySQL either. I already set up a custom password file that Dovecot uses for authentication, and I can log in to POP3 with SSL. I can see from the logs that Postfix is delivering to the system user accounts (the catch-all) instead of the virtual users that I set up in Dovecot. The SMTP + SSL authentication seems to work as well. I can also see from the logs that Dovecot is checking the correct virtual mail folder. I just need to figure out how to get Postfix to deliver to the virtual mailboxes. I have the following settings, which I believe are relevant; let me know what other settings you need to see:
        alias_maps = hash:/etc/aliases
        mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
        mydomain = xxx.com
        myhostname = mail.xxx.com
        mynetworks = 99.99.99.99, 99.99.99.99
        myorigin = $mydomain
        relay_domains = $mydestination, xxx.com, domain2.net, domain3.com
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        smtpd_recipient_restrictions = reject_non_fqdn_sender reject_non_fqdn_recipient reject_unknown_recipient_domain permit_sasl_authenticated check_relay_domains
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_path = private/auth
        smtpd_sasl_type = dovecot
        smtpd_sender_restrictions = check_sender_mx_access cidr:/etc/postfix/bogus_mx reject_invalid_hostname reject_unknown_sender_domain reject_non_fqdn_sender
        virtual_mailbox_base = /var/spool/vmail
        virtual_mailbox_domains = xxx.com, domain2.net, domain3.com
        virtual_minimum_uid = 444
    Postfix master.cf:
        submission inet n - - - - smtpd
          -o smtpd_tls_security_level=encrypt
          -o smtpd_sasl_auth_enable=yes
          -o smtpd_sasl_type=dovecot
          -o smtpd_sasl_path=private/auth
          -o smtpd_sasl_security_options=noanonymous
          -o smtpd_sasl_local_domain=$myhostname
          -o smtpd_client_restrictions=permit_sasl_authenticated,reject
          -o smtpd_sender_login_maps=hash:/etc/postfix/virtual
          -o smtpd_sender_restrictions=reject_sender_login_mismatch
          -o smtpd_recipient_restrictions=reject_non_fqdn_recipient,reject_unknown_recipient_domain,permit_sasl_authenticated,reject
    Dovecot related:
        mail_location = maildir:~/Maildir
        passdb {
          args = /etc/dovecot/users.conf
          driver = passwd-file
        }
        service auth {
          unix_listener /var/spool/postfix/private/auth {
            mode = 0660
            user = postfix
          }
        }
    The virtual mail user:
        vmail:x:444:99:virtual mail users:/var/spool/vmail:/sbin/nologin
    Here is the /var/log/maillog output when I try to send something to myself:
        Oct 25 22:10:05 308321 postfix/smtpd[2200]: connect from user-999.cable.mindspring.com[99.99.99.99]
        Oct 25 22:10:05 308321 postfix/smtpd[2200]: D224BD4753: client=user-999.cable.mindspring.com[99.99.99.99], sasl_method=LOGIN, [email protected]
        Oct 25 22:10:06 308321 postfix/cleanup[2207]: D224BD4753: message-id=<7DC3C163CFFC483AB6226F8D3D9969D2@dumbopc>
        Oct 25 22:10:06 308321 postfix/qmgr[2168]: D224BD4753: from=<[email protected]>, size=1385, nrcpt=1 (queue active)
        Oct 25 22:10:06 308321 postfix/smtpd[2200]: disconnect from user-999.cable.mindspring.com[99.99.99.99]
        Oct 25 22:10:06 308321 postfix/local[2208]: D224BD4753: to=<[email protected]>, orig_to=<[email protected]>, relay=local, delay=1.1, delays=0.53/0.02/0/0.51, dsn=2.0.0, status=sent (delivered to mailbox)
        Oct 25 22:10:06 308321 postfix/qmgr[2168]: D224BD4753: removed
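    For context, my understanding is that Postfix only switches from local(8) to virtual(8) delivery for a domain when that domain is listed in virtual_mailbox_domains (and not also in mydestination) and has recipient entries in a virtual_mailbox_maps table. A minimal sketch of what I believe that looks like - the map path and the static UID/GID values here are assumptions, not taken from my current config:
        # main.cf (sketch)
        virtual_mailbox_domains = xxx.com, domain2.net, domain3.com
        virtual_mailbox_base = /var/spool/vmail
        virtual_mailbox_maps = hash:/etc/postfix/vmailbox
        virtual_uid_maps = static:444
        virtual_gid_maps = static:99

        # /etc/postfix/vmailbox (sketch), then run: postmap /etc/postfix/vmailbox
        [email protected]    xxx.com/user/Maildir/
    Is a mapping along those lines what I'm missing, or is there something else in my setup that keeps handing these domains to the local delivery agent?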

    Read the article

  • Ubuntu Lucid: Erratic screen behaviour after boot

    - by fgysin
    In short: about 50% of the time I have a screwed-up monitor setup after reboot; the other 50% of the time it is totally correct. Now the longer version: I updated my machine from 9.04 to 10.04 (via 9.10). At first I ran into some monitor problems (I have a 3-monitor setup) because of the known bug in the new X server driver regarding Xinerama, which messes up behaviour if the mouse goes left of or above screen number 0, i.e. I had to make my left-most monitor screen 0. Everything worked out fine in the end: I got my 3-monitor setup back, with Xinerama enabled, giving one big desktop stretched over 3 screens. Now the fun part: every time I start up my machine, only one of the 3 monitors gets a signal and is woken up: the system only recognizes the left-most monitor (screen 0) and crams the whole desktop into that one screen. If I go into nvidia-settings I only see one physical device, although all 3 monitors are connected and have power. When I look into xorg.conf I can still see my old setup with 3 devices, 3 screens, Xinerama active etc., but I was totally unable to get the 3 monitors to work (I tried unplugging monitors, reconfiguring the whole nvidia setup, ...). But it gets even better: when I restart my machine (i.e. choose the restart option from the Ubuntu menu), it shuts down and tries to restart, but the restart gets stuck after showing the Ubuntu splash screen with the 'loading bar' (the moving-dots thingy), and I am forced to kill the machine by cutting power. After the power cut, the machine boots up normally and suddenly I get my working 3-monitor setup back. That is, until the next time I shut down and start up, when it all starts over again and I only have one monitor (see above). I really have a hard time seeing where the error is. It must be that the restart boot somehow differs from a 'normal' boot, but the fact that it gets stuck and I need to cut power - which then basically triggers a 'normal' boot - doesn't really support that theory... My setup (please tell me if you need further info): 3 monitors as 3 screens forming one desktop (with Xinerama); 2 nvidia cards, where screens 0 and 1 are on card 0 and screen 2 is on card 1; Ubuntu 10.04 Lucid Lynx (updated from 9.10, 9.04, ...). I would appreciate every idea on the subject; at the moment I really don't have any clue what to do...
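    To illustrate the layout in question, here is a stripped-down sketch of the relevant xorg.conf sections (identifiers and BusIDs are placeholders, not copied from my actual file):
        Section "ServerLayout"
            Identifier "ThreeScreens"
            Screen 0 "Screen0" 0 0
            Screen 1 "Screen1" RightOf "Screen0"
            Screen 2 "Screen2" RightOf "Screen1"
            Option "Xinerama" "1"
        EndSection

        # An explicit BusID (taken from 'lspci | grep -i vga') ties a Device section to a
        # physical card, so probe order at boot can't change which card drives which screen.
        Section "Device"
            Identifier "Card1"
            Driver     "nvidia"
            BusID      "PCI:2:0:0"
        EndSection
    Could the intermittent failure simply be the second card being probed in a different order on some boots, and would pinning BusIDs like this be the right way to rule that out?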

    Read the article

  • Plesk Postfix Mail Server 9.5.4 very heavy load, 1000s of processes

    - by Eugene van der Merwe
    Our Plesk Linux Ubuntu 64-bit mail server is under extremely high load and we don't know how to isolate the cause. The load was okay until two weeks ago, but in the last two weeks it has seriously deteriorated. The mail server has been running for years and we have had sporadic performance issues. Normally we reduce the load by turning off all SPAM checks until the problem is sorted (which sometimes resolves itself). Currently we have turned off real-time block lists and SPF checking, and we have attempted to turn off SpamAssassin; no matter what we do, the SpamAssassin check box stays ticked in the GUI, so out of desperation we have run /etc/init.d/psa-spamassassin stop. For years we haven't been able to use SpamAssassin because it kills the server. We would like to use it, but performance is more important for now. We cannot turn off greylisting: the moment we do, our help desk is inundated with calls. Out of desperation we investigated truncating the greylisting database, which is now 2.5 GB, but we abandoned this after noticing that turning off greylisting doesn't improve the performance at all. We have no anti-virus - it's just more load, and Dr. Web never really worked that well for us - but we'll try that if it will make a difference. We implemented Postfix Anvil; this seemed to make the situation worse, so we disabled it, though we're not certain it was actually the cause. Our current mail server is configured to forward all SMTP to a relay server; we did so to reduce the load, and it helped a lot because the outgoing queues are generally empty. We are running in an Expand configuration. The mail server has about 12,000 accounts, of which maybe half are active. We have read through http://www.postfix.org/STRESS_README.html, but there are too many settings and we don't know which ones to choose. Please assist urgently - we need advice on how to fix this problem before all our clients abandon us. The only clue we have is that there are 100s of these processes:
        30 13205 1 0 13:18 ? 00:00:00 /usr/lib/plesk-9.0/postfix-queue 127.0.0.1 10027 before-queue
        30 13207 1 0 11:38 ? 00:00:00 /usr/lib/plesk-9.0/postfix-queue 127.0.0.1 10027 before-queue
        30 13208 1 0 13:18 ? 00:00:00 /usr/lib/plesk-9.0/postfix-queue 127.0.0.1 10026 before-remote
        30 13209 1 0 11:38 ? 00:00:00 /usr/lib/plesk-9.0/postfix-queue 127.0.0.1 10026 before-remote
        30 13213 1 0 13:18 ? 00:00:00 /usr/lib/plesk-9.0/postfix-queue 127.0.0.1 10027 before-queue
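    In case it helps frame an answer: from the STRESS_README, the knobs we are tentatively looking at are the smtpd process and timeout limits. Here is a sketch of main.cf values for discussion only - these numbers are guesses, not tested on this box (more concurrent processes, shorter timeouts, disconnect misbehaving clients after the first error, and cap parallel connections per client):
        # untested starting points, per STRESS_README
        default_process_limit = 200
        smtpd_timeout = 10s
        smtpd_hard_error_limit = 1
        smtpd_client_connection_count_limit = 10
    As far as we can tell, the /usr/lib/plesk-9.0/postfix-queue processes are Plesk's before-queue/before-remote mail-handler hooks, so anything that slows those handlers down multiplies across every concurrent connection - is that the right place to look?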

    Read the article

< Previous Page | 541 542 543 544 545 546 547 548 549 550 551 552  | Next Page >