Search Results

Search found 22900 results on 916 pages for 'pascal case'.


  • What causes Windows Media Player on Windows 8 to not play the entire library?

    - by somequixotic
    Behavior 1: Verify that the WMP playlist is clear of all songs. Turn on the "Shuffle" and "Repeat" features. Double-click on a music track in the Library. Click the "Next" button (double right angle brackets). A random track from the Library is chosen and played, and when observing the Playlist (clicking the "Play" tab), the entire contents of the Library appear in the Playlist.

    Behavior 2: Verify that the WMP playlist is clear of all songs. Turn on the "Shuffle" and "Repeat" features. Double-click on a music track in the Library. Click the "Next" button. The button visually depresses as if it has registered the click, but nothing happens. Absolutely nothing. Moreover, the "Previous" button is grayed out, and when observing the Playlist, only the one song that was double-clicked appears in it.

    What causes Behavior 2? I cannot correlate any specific action I've taken with it. Behavior 1 has been the case as long as I can remember, all the way back to Windows XP, and earlier in my usage of Windows 8 I recall Behavior 1 working correctly. But suddenly, without my changing any settings in WMP, Behavior 2 kicked in, and it persists after reboots. I've run sfc /scannow from an administrator prompt and all system files are in order. I've installed all Windows Updates and driver updates. I've attempted to alter WMP options and playback settings, to no avail. So what is causing Behavior 2? Is this an intended, valid behavior, or is something malfunctioning? How would I identify that "something", and how would I go about fixing it without just reinstalling Windows 8 fresh?

  • DHCP forwarding behind access list on a Cisco Catalyst

    - by Ásgeir Bjarnason
    I'm having some trouble with forwarding DHCP from a subnet behind an access list on a Cisco Catalyst 4500 switch. I'm hoping somebody can see the mistake I'm making. The subnet is defined like this (first three octets of IP addresses and the vrf name anonymized):

        interface Vlan40
         ip vrf forwarding vrf_name
         ip address 10.10.10.126 255.255.255.0 secondary
         ip address 10.10.10.254 255.255.255.0
         ip access-group 100 out
         ip helper-address 10.10.20.36
         no ip redirects

    I tried turning on a VMWare machine on this subnet that was configured to use DHCP, but I never got a DHCP response and the DHCP server didn't receive a request. I tried putting the following in the access list:

        access-list 100 permit udp host 10.10.10.254 host 10.10.20.36 eq bootps
        access-list 100 permit udp host 10.10.10.254 host 10.10.20.36 eq bootpc
        access-list 100 permit udp host 10.10.20.36 host 10.10.10.254 eq bootps
        access-list 100 permit udp host 10.10.20.36 host 10.10.10.254 eq bootpc

    That didn't help. Can anybody see what the problem is? I know that the DHCP server works; our whole network is running off of it. I also know that the subnet works, because we have active servers running on it. The DHCP scope is already defined on the DHCP server, and the subnet is correctly defined on the VMWare server (there are already servers running on that subnet in VMWare).

    Edit 2012-10-19: This is solved! The subnet had formerly been defined as a /25 network, but was then expanded into a /24 network. When the DHCP scope was altered after this change, it was done incorrectly: the gateway was moved to .254 and the leasable IP range was placed in the lower half of the /24 subnet, but we forgot to change the CIDR prefix from /25 to /24. This happened some 2 years ago, and we didn't need to use DHCP on this server network again until this week. Thank you MDMarra and Jason Seemann for looking at the question and trying to troubleshoot. Now I'm wondering if I should mark Jason's answer as the accepted answer (I am new to the Stack Exchange network, so I don't know the etiquette for when I've misstated the question, as in this case).

  • Linux centos trouble with egrep command in words folder

    - by seth
    I need the commands to list these things for a class, but for the life of me I cannot figure it out. If anyone could offer any insight on how to get so specific with the egrep command, or just answer the questions, it would be highly appreciated. Some I have already figured out, but if they look wrong, any corrections may help too.

    1. List all words that have the letter a followed immediately by the letter z. My attempt: egrep {a,}{z,} words
    2. List all words that have the letter a followed sometime later by the letter z (there must be at least one letter in between). My attempt: egrep {a,?,z} words
    3. List all words that start with the letter a and end with the letter z. My attempt: egrep "^a.*z$" words
    4. List all five letter words that start with the letter a and end with the letter z.
    5. List all words that start with two capital letters followed immediately by at least one lower case letter.
    6. List all words with two consecutive a’s or i’s or u’s. Use {2} to denote “two consecutive” and the pipe character, |, to denote “or”. My attempt: egrep [a|i|o] {2} words
    7. List all words that contain a q where the q is not immediately followed by a u. For instance, queen should not be in your list but Iraqi should be.
    8. List all entries in the file that contain at least one non-letter.
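
    A plausible set of patterns for the eight items, as a hedged sketch assuming GNU egrep and a standard one-word-per-line words file:

        # 1. a immediately followed by z
        egrep 'az' words
        # 2. a followed later by z, at least one character in between
        egrep 'a.+z' words
        # 3. start with a, end with z
        egrep '^a.*z$' words
        # 4. five-letter words: a, any three characters, z
        egrep '^a...z$' words
        # 5. two capitals, then at least one lower case letter
        egrep '^[A-Z]{2}[a-z]' words
        # 6. two consecutive a's, i's or u's, using {2} and |
        egrep 'a{2}|i{2}|u{2}' words
        # 7. q not immediately followed by u (a trailing q also qualifies)
        egrep 'q[^u]|q$' words
        # 8. entries containing at least one non-letter
        egrep '[^A-Za-z]' words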

  • "postgres blocked for more than 120 seconds" - is my db still consistent?

    - by nn4l
    I am using an iSCSI volume on an Open-E storage system for several virtual machines running on a XenServer host. Occasionally, when there is very high disk I/O load on the virtual machines (and therefore also on the storage system), I get this error message on the VM consoles:

        [2594520.161701] INFO: task kjournald:117 blocked for more than 120 seconds.
        [2594520.161787] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [2594520.162194] INFO: task flush-202:0:229 blocked for more than 120 seconds.
        [2594520.162274] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [2594520.162801] INFO: task postgres:1567 blocked for more than 120 seconds.
        [2594520.162882] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

    I understand that this is the kernel informing me that these processes haven't run for 120 seconds, most likely because a disk access to the storage system has not yet been processed. But what is the effect on the processes? For example, will the postgres process eventually write its data when the storage system is idle again after a few minutes, so that all data is still consistent? Or will it abort the write, leaving some tables in an inconsistent state?

    I certainly expect the former to be the case: if the disk access is slow, postgres (or any other affected process) should just wait as long as it takes. I can live with the application hanging for a few minutes. But if there is a chance of data corruption, then any of these errors is really bad news. Please advise what to do here.

  • Repairing Damage to VMWare Virtual Disk

    - by Lachlan McDonald
    Evening all, I've got a considerable problem I'm hoping to get some resolution on. I had two VMWare 6.5 virtual machines, one running Ubuntu 9.10 and the other Ubuntu 10.04. I used 9.10 as a testing server, so I could install a LAMP environment to prepare some code. Over the months I took a number of snapshots of this VM just in case something went wrong, and did a full copy of the entire VM a month ago. I created the 10.04 VM when Lucid Lynx launched, so I could continue development on a fresh install. To get the files over, I simply added the 9.10 virtual disk to the 10.04 VM, grabbed some of the files I needed, and dismounted it.

    Unknown to me at the time, the changes to the 9.10 virtual disk meant that I could no longer boot it with the 9.10 VM; I'd always get the "The parent virtual disk has been modified since the child was created." error. I decided this was a good time to back up all the critical files, but now, whenever I open the 9.10 disk to get the data, it isn't in the same state as it was earlier.

    My question is: when I'm mounting the virtual disk, is it possible that I'm not seeing the most recent snapshot, or, in my blundering, have I lost the virtual disk? Cheers

  • I keep losing wireless connection

    - by posfan12
    I have a WRT54GL v1.1 wireless router and a WUSB54G v4 wireless adapter, both made by Linksys. The router is in the living room by the TV and my computer is in the bedroom. My ISP is Brighthouse.

        Operating System: Microsoft Windows 7 Home Premium 64-bit SP1
        CPU: Intel Core 2 Duo E6600 @ 2.40GHz, 36 °C (Conroe, 65nm Technology)
        RAM: 3.00GB Single-Channel DDR2 @ 333MHz (5-4-4-14)
        Motherboard: eMachines EMCP73VT-PM (CPU 1), 26 °C
        Graphics: ASUS VS247 (1920x1080@60Hz), 767MB GeForce GTX 460 (nVidia), 43 °C
        Hard Drives: 466GB Seagate ST3500418AS SCSI Disk Device (SATA), 35 °C
        Optical Drives: HL-DT-ST DVDRAM GH41N SCSI CdRom Device
        Audio: High Definition Audio Device

    The problem is that my Internet connection will work fine for 15 minutes or so. Then the data will just stop flowing. Windows says I am still connected, and the systray icon still shows five bars, but Comodo Firewall stops showing up and down traffic, and another of my systray applications complains about a lack of connection. What I usually do is either disconnect from the network manually, or unplug and re-plug the USB adapter, at which point the connection will work properly for another 15 minutes.

    I've tried unplugging my router for 30 seconds and letting it reboot. I've also tried looking for a newer driver for my adapter, but I seem to have the latest, version 3.1.3.0. This is a recent problem, starting about a week ago; for the previous several months things were working just fine. I haven't made any changes to my system that I am aware of. The only thing I did was open my case to blow the dust out of it, then put everything back together. How do I fix this issue?

  • btrfs: can i create a btrfs file system with data as jbod and metadata mirrored

    - by Yogi
    I am trying to build a home server that will be my NAS/media server as well as the XBMC front end. I am planning on using Ubuntu with btrfs for the NAS part of it. The current setup consists of a 1TB hdd for the OS etc. and two 2TB hdd's for data. I plan to have the 2TB hdd's set up as a JBOD btrfs system to which I can add hdd's as needed later, basically growing the filesystem online.

    The way I had set up the file system for testing was to have only one of the hdd's connected while installing the OS, with btrfs on it mounted as /data, and later on add another hdd to this file system. When the second disk was added, btrfs made the data RAID 0, with metadata being RAID 1. However, this presents a problem: if even one of the disks fails, I lose all my data (mostly media). Also, most of the time the server will be running without doing any disk access, i.e. the hdd's can be spun down. When an access request comes in, with the current RAID 0 setup both disks will spin up; in a JBOD, only the disk that has the file needs to be spun up. This should hopefully reduce the wear on each disk.

    So, is there a way in which I can have btrfs set up such that metadata is mirrored but data stays in a JBOD formation? Another question I have is this: I understand that a full drive failure in JBOD will lose the data on that drive, but with metadata mirrored across all drives, will this help the filesystem correct errors that might creep in (e.g. bit rot), and is btrfs capable of doing this?
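
    For reference, btrfs exposes exactly this combination of profiles; a minimal sketch, assuming a reasonably recent kernel and btrfs-progs, and that /dev/sdb and /dev/sdc are the two data drives (hypothetical device names):

        # create the filesystem with single (JBOD-style) data and mirrored metadata
        mkfs.btrfs -d single -m raid1 /dev/sdb /dev/sdc

        # or convert an existing filesystem whose data came up as RAID 0
        btrfs balance start -dconvert=single -mconvert=raid1 /data

    With -d single, each file lives on one device, so only that disk needs to spin up for a read; -m raid1 keeps two copies of all metadata, which lets btrfs detect (via checksums) and repair metadata corruption, though a whole-disk failure still loses the file data stored on that disk.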

  • How to whitelist external access to an internal webserver via Cisco ACLs?

    - by Josh
    This is our company's internet gateway router, a Cisco 2691. This is what I want to accomplish on it:

    1. All employees need to have unrestricted access to the internet (I've blocked Facebook with an ACL, but other than that, full access).
    2. There is an internal webserver that should be accessible from any internal IP address, but only from a select few external IP addresses. Basically, I want to whitelist access from outside the network.

    I don't have a hardware firewall appliance. Until now, the webserver has not needed to be accessible externally, or in any case the occasional VPN has sufficed when needed. As such, the following config has been sufficient:

        access-list 106 deny ip 66.220.144.0 0.0.7.255 any
        access-list 106 deny ip ... (so on for the Facebook blocking)
        access-list 106 permit ip any any
        !
        interface FastEthernet0/0
         ip address x.x.x.x 255.255.255.248
         ip access-group 106 in
         ip nat outside

    (fa0/0 is the interface with the public IP.) However, when I add...

        ip nat inside source static tcp 192.168.0.52 80 x.x.x.x 80 extendable

    ...in order to forward web traffic to the webserver, that just opens it up entirely. That much makes sense to me. This is where I get stumped, though. If I add a line to the ACL to explicitly permit (whitelist) an IP range, something like this:

        access-list 106 permit tcp x.x.x.x 0.0.255.255 192.168.0.52 0.0.0.0 eq 80

    ...how do I then block other external access to the webserver while still maintaining unrestricted internet access for internal employees? I tried removing the access-list 106 permit ip any any. That ended up being a very short-lived config :) Would something like access-list 106 permit ip 192.168.0.0 0.0.0.255 any on an "outside-inbound" work?
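
    One hedged sketch of the usual shape: on IOS, the inbound ACL on the outside interface is evaluated before the static NAT translation, so the whitelist entries have to match the public address rather than 192.168.0.52. Here w.w.w.w 0.0.255.255 stands in for the whitelisted external range, and x.x.x.x is the public IP from the NAT line; order matters, since the first match wins:

        access-list 106 permit tcp w.w.w.w 0.0.255.255 host x.x.x.x eq 80
        access-list 106 deny   tcp any host x.x.x.x eq 80
        ! ...the existing Facebook-blocking denies...
        access-list 106 permit ip any any

    The final permit ip any any is what keeps return traffic for the employees' outbound sessions flowing, so it stays at the bottom.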

  • How to write re-usable puppet definitions?

    - by Oliver Probst
    I'd like to write a Puppet manifest to install and configure an application on target servers. Parts of this manifest shall be re-usable, so I used define for my re-usable functionality. Doing so, I always have the problem that there are parts of the definition which are not re-usable. A simple example is a bunch of configuration files to be created: these files must be placed in the same directory, and this directory must be created only once.

    Example, nodes.pp:

        node 'myNode.in.a.domain' {
          mymodule::addconfig {'configfile1.xml':
            param => 'somevalue',
          }
          mymodule::addconfig {'configfile2.xml':
            param => 'someothervalue',
          }
        }

    mymodule.pp:

        define mymodule::addconfig ($param) {
          $config_dir = "/the/directory/"

          # ensure that the directory exists:
          file { $config_dir:
            ensure => directory,
          }

          # create the configuration file:
          file { $name:
            path    => "${config_dir}/${name}",
            content => template('a_template.erb'),
            require => File[$config_dir],
          }
        }

    This example will fail, because the resource file { $config_dir: is now defined twice. As far as I understood, it is required to extract these parts into a class. Then it looks like this:

        node 'myNode.in.a.domain' {
          class { 'mymodule::createConfigurationDirectory': }

          mymodule::addconfig {'configfile1.xml':
            param   => 'somevalue',
            require => Class['mymodule::createConfigurationDirectory'],
          }
          mymodule::addconfig {'configfile2.xml':
            param   => 'someothervalue',
            require => Class['mymodule::createConfigurationDirectory'],
          }
        }

    But this makes my interface hard to use: every user of my module has to know that there is an additionally required class. For this simple use case the extra class might be acceptable, but with growing module complexity (lots of definitions) I'm a bit afraid of confusing the module's users. So I'd like to know: is there a better way to handle these dependencies? Ideally, classes like createConfigurationDirectory are hidden from the user of the module's API. Or are there some other "Best Practices"/patterns for handling such dependencies?
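
    One pattern that fits, offered as a hedged sketch (class and path names here are made up): unlike a resource declaration, include is idempotent, so the define can pull in the directory class itself and callers never have to know it exists:

        class mymodule::config_dir {
          file { '/the/directory':
            ensure => directory,
          }
        }

        define mymodule::addconfig ($param) {
          # include may be evaluated once per addconfig instance without conflict
          include mymodule::config_dir

          file { $name:
            path    => "/the/directory/${name}",
            content => template('a_template.erb'),
            require => Class['mymodule::config_dir'],
          }
        }

    With this shape, the nodes.pp from the question needs no explicit class declaration at all; the dependency stays internal to the module.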

  • System fans connected to a Gigabyte Z77-D3H motherboard do not increase in speed

    - by Andrew
    The motherboard (Gigabyte Z77-D3H) controls my 3-pin CPU fan just fine. My system fans are a 3-pin fan (plugged into SYS_FAN1) and a 4-pin fan (plugged into SYS_FAN3). All 3 of the system fan headers are 4-pin, but the user manual states that SYS_FAN1 is really a 3-pin header (that it can control the speed of a 3-pin fan) and the 4th pin is just a reserve. All my fans have a max of 2000 RPM. Normally, all the fans run around 1000 RPM when I'm not doing anything intensive, which proves that the motherboard can set the speed. However, when I run Folding@Home and my temperatures increase (to around 70 °C), only the CPU fan increases to around 2000 RPM; the system fans stay around 1000 RPM. Through the BIOS I am able to disable system fan control, and the system fans then run at max RPM (meaning the motherboard was doing something). I've updated the BIOS to the latest version and tried out Speedfan, but neither helped my situation. What I'd like is for the system fans to ramp up their RPMs as needed. Any thoughts?

    Tl;dr: My system (case) fans, but not my CPU fan, are always stuck around 1000 RPM out of 2000, no matter the temperature.

  • Copying symbolic links and filenames with special characters to NAS

    - by Mr E
    I have a new Western Digital My Book Live NAS. I am trying to copy files from an old drive to the NAS. I'm using Ubuntu 12.04 and I've mounted the drive by browsing the network in Nautilus and choosing a shared folder configured on the NAS. The shared folder is then automatically mounted at .gvfs/files on mybooklive. There are two problems so far:

    1. File names and directory names containing certain characters (e.g. : or |). Attempting to copy these results in the error message: cp: cannot stat `/path/to/destination.filename': Invalid argument
    2. Symbolic links. In Nautilus I get the error message: Symlinks not supported by backend

    My questions are:

    1. Can I connect to the NAS or configure the NAS so that I can copy my files without this problem? (In case it matters, I don't need Windows compatibility.)
    2. If not, what can I do to identify all the problem files?
    3. Can I do anything to automatically fix my filenames?

    Please let me know if any of this needs clarification. I'm not too familiar with all of this, so I may have left out some useful information.
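
    On the identify-and-fix side, a rough sketch with standard tools, assuming the old drive is mounted at /mnt/olddrive (a hypothetical path); test the rename on a copy first:

        # list names containing characters that commonly upset network shares
        find /mnt/olddrive -name '*[:|*?"<>]*'

        # list all symbolic links, which the gvfs backend refuses to copy
        find /mnt/olddrive -type l

        # crude auto-fix: replace : and | with _ ; -depth renames children
        # before their parent directories
        find /mnt/olddrive -depth -name '*[:|]*' | while IFS= read -r f; do
            mv -- "$f" "$(dirname -- "$f")/$(basename -- "$f" | tr ':|' '__')"
        done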

  • OS X AFP shares and access

    - by gbrandt
    I am running OS X 10.5.6 Client as a mini server and am having problems with AFP shares. All clients are OS X 10.5.7. I have created three users for 'File Sharing' only on the 'server', created groups and placed these users into specific groups, and created ACLs to give each group access to certain shares. Two of those users can read and write to any share; one user cannot write to the shares, with different results:

    1. When copying a directory, only the directory is created; no files inside are copied, and the OS does not give any errors.
    2. When copying a single file I get three dialogs: "You may need to enter the name and password for an administrator on this computer to change the item named 'xxxx'", "The item 'xxxxx' contains one or more items you do not have permission to read. Do you want to copy the items you are allowed to read?", and "The operation cannot be completed because you do not have sufficient privileges for some of the items." With the single file, a file gets created on the server, but it is empty.

    My ACL for the group this user belongs to is:

        0: group:projectmembers allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,file_inherit,directory_inherit
        1: group:informationtechnology inherited allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,file_inherit,directory_inherit
        2: group:executive inherited allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,file_inherit,directory_inherit
        3: group:everyone inherited deny list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,file_inherit,directory_inherit

    Users 1 & 2 belong to informationtechnology, executive and projectmembers; they can read and write freely on the share. User 3 belongs to projectmembers and cannot. I have read that this is a UID issue; however, users 1 & 2 do not have matching UIDs across clients and server and they work, so I don't think this is the case. Any ideas?

  • Software distribution from web server to client using PHP/FTP

    - by Jenolan
    I develop and maintain a number of add-ons and utilities for various widgets (mainly aMember), which generally means I need to install PHP-based code onto other people's systems. Whilst I have a VPS and have access to rsync and all sorts of yummy tools, most of the people I deal with have basic FTP access and that's all, folks. Uploading from my local system is also a problem, as I am satellite based (two-way), so it is fairly slow and expensive, and in any case the files are already on my server. So there is no rsync, fxp or ssh, and I can't really install anything, as it is obviously not my system; they would be justifiably miffed if I started installing file managers or other things onto their sites.

    What I have been trying to find is a utility that I can run on my server from the web, preferably PHP based, that will be like a file manager but a bit different: two panels, with the left-hand side showing the local server (pretty much like a standard FM application) and the right-hand side able to log in via FTP to the client's system. Then I can fiddle as required. The closest thing I have found is net2ftp, but it doesn't have the GUI interface. At the moment I simply ssh into my server, power up ncftp and run that way, but something easier to use would be mucho niceness. Thanks in advance! Larry
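
    Not a GUI, but as a scriptable stopgap that runs entirely on the server shell, a hedged sketch assuming lftp is installed on the VPS (host, credentials and paths below are placeholders):

        # push a local tree on the VPS to a client's site over plain FTP;
        # mirror -R reverses direction (local -> remote)
        lftp -u clientuser,clientpass \
             -e "mirror -R /home/me/builds/amember-addon /public_html/amember; quit" \
             ftp.client.example.com

    Since the transfer happens server-to-client, nothing crosses the satellite link except the command itself.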

  • Installing MySQL 5.1 on OS X 10.7 Lion

    - by xisal
    I am trying to install MySQL 5.1. I am on Lion, and when I remove all files associated with MySQL on my machine, it still tells me that I have a newer version installed when I try to install it from the DMG file. Has anyone successfully installed MySQL 5.1 on Lion? I found a solution using Homebrew:

    1. Completely remove MySQL from your system (just in case):

        sudo rm /usr/local/mysql
        sudo rm -rf /usr/local/mysql*
        sudo rm -rf /Library/StartupItems/MySQLCOM
        sudo rm -rf /Library/PreferencePanes/My*
        vim /etc/hostconfig   # and remove the line MYSQLCOM=-YES-
        rm -rf ~/Library/PreferencePanes/My*
        sudo rm -rf /Library/Receipts/mysql*
        sudo rm -rf /Library/Receipts/MySQL*
        sudo rm -rf /var/db/receipts/com.mysql.*

    Source: http://stackoverflow.com/questions/1436425/how-do-you-uninstall-mysql-from-mac-os-x

    2. Install Homebrew:

        /usr/bin/ruby -e "$(curl -fsSL https://raw.github.com/gist/323731)"

    Source: https://github.com/mxcl/homebrew/wiki/installation

    3. Install MySQL 5.1 via brew:

        brew install mysql51

    If that doesn't work, do this:

        brew install https://raw.github.com/adamv/homebrew-alt/master/versions/mysql51.rb

    Source: http://stackoverflow.com/questions/4359131/brew-install-mysql-on-mac-os/6399627#6399627

    4. Make MySQL work. Create the mysql.sock file:

        touch /tmp/mysql.sock

    Then install the MySQL default tables (adjust for your path):

        /usr/local/Cellar/mysql51/5.1.58/bin/mysql_install_db

    Source: http://stackoverflow.com/questions/4788381/getting-cant-connect-through-socket-tmp-mysql-when-installing-mysql-on-ma/5140849#5140849

  • Networking problems in VMWare with wireless bridge

    - by Robert Koritnik
    Barebone data:

        Virtualization: VMWare Workstation 6.5 (latest)
        Host: Windows Server 2008 x64
        Guest: Windows Server 2008 x86
        Host network adapter: wireless
        Guest network adapter 1: over Bridge VMNet (automatic)
        Guest network adapter 2: over Host only VMNet

    Problem: When I surf the net within the VM, my internet connection just gets stalled (not dropped). It doesn't experience any timeout whatsoever; it just stops downloading/communicating. For instance: I start downloading a file with a browser (IE/FF/CR, doesn't matter) and I have to pause/restart the download when the speed drops to 0. I could wait indefinitely, but the connection won't pick up automatically. What did I miss in my network configuration?

    Update 1: I've tested this in various combinations. This works fine when the host is connected via Ethernet. But when connected via Wifi, the connection on the guest works as previously described. It connects fine and gets a valid IP from DHCP. Everything is cool as long as you don't start doing some intensive network traffic (i.e. download a 2MB file). In that case it starts downloading and stops after a while; the speed just drops to 0 B/s. Sometimes it picks back up, sometimes it doesn't. The connection still stays up and works; I can ping around with no problem.

  • Skip all warning prompts on ACPI shutdown?

    - by N Rahl
    When I issue an ACPI shutdown command to a Windows XP guest machine from the host VM server, I want Windows to shut down. The problem is, Windows always wants to ask some question or another rather than just shutting down. I need shutdown to be reliable, no matter what is running or going on, so I can automate shutdowns from the host machine. But I want it to be as graceful as possible, rather than just pulling the plug. Some problems:

    1. If a user is logged in, ACPI shutdown causes a box to appear that says "are you sure you want to shutdown while other users are logged in?", and this prevents shutdown until someone connects to the machine and clicks "yes". In this case, it should try its best to gracefully log out all users, using force if necessary, and then shut down without prompting.
    2. Busy or non-responding programs, or programs asking to save data, can prevent Windows from shutting down until a user answers a prompt. This should attempt to save data and wait maybe 30 seconds for non-responding programs, but should get aggressive with stubborn programs: "Nope, time's up! 3, 2, 1, goodbye!"

    Is there a registry setting that I can change from "ACPI_Shutdown: shut down if Windows feels like it" to "ACPI_Shutdown: just do it; kill programs, bump users, try to be graceful about it, but when I come back, I expect you to be off"? This should respond to the ACPI shutdown command, and not be a script on Windows, unless that script is triggered by the ACPI power button. I'm hoping this can be changed with registry options.
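
    There is no single documented switch covering all of this that I know of, but XP's stock shutdown timeouts get part of the way; a hedged sketch of the usual values (the keys are real XP settings, though whether they suppress the logged-in-users prompt is doubtful):

        REM auto-end tasks and shorten the hung/busy-application timeouts
        reg add "HKCU\Control Panel\Desktop" /v AutoEndTasks /t REG_SZ /d 1 /f
        reg add "HKCU\Control Panel\Desktop" /v HungAppTimeout /t REG_SZ /d 5000 /f
        reg add "HKCU\Control Panel\Desktop" /v WaitToKillAppTimeout /t REG_SZ /d 30000 /f
        REM services get the same treatment at the machine level
        reg add "HKLM\SYSTEM\CurrentControlSet\Control" /v WaitToKillServiceTimeout /t REG_SZ /d 30000 /f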

  • If I can take a screen capture of a graphical anomaly, can it still be a hardware issue?

    - by Jay Carr
    I have a strange graphical anomaly going on my iMac right now (green and magenta boxes are appearing sporadically). I'm slowly trying to work through different possibilities, but I thought I'd start with the basics: can a graphical anomaly that I can screen capture still be a hardware issue?

    I know, it seems really obvious. If it's hardware, it should show up well after the operating system has had its say. And since the operating system is (I assume) doing the screen capture, it seems like it shouldn't see the anomaly unless the problem is software in nature. But as I've researched this problem, I see a lot of people taking their computers in to service people for hardware issues and Apple then resolving said issue. To further complicate things, I also have Windows 8 installed via Boot Camp, and the issue seems to be showing up there as well. Anyway, it feels like it must be a driver issue, since I assume that's what the two OSes have in common, but I thought I'd come here for some disambiguation. In my case, yes, I can screen capture the anomaly (at least in OS X I can), so I assume it's somehow a software (or driver) issue. But I wanted to double check, because the internet is being ambiguous...

  • F5 Networks iRule/Tcl - Escaping UNICODE 6-character escape sequences so they are processed as and r

    - by openid.malcolmgin.com
    We are trying to get an F5 BIG-IP LTM iRule working properly with SharePoint 2007 in an SSL termination role. This architecture offloads all of the SSL processing to the F5, and the F5 forwards interactive requests/responses to the SharePoint front end servers via HTTP only (over a secure network). For the purposes of this discussion, iRules are parsed by a Tcl interpretation engine on the F5 Networks BIG-IP device. As such, the F5 does two things to traffic passing through it:

    1. Redirects any request to port 80 (HTTP) to port 443 (HTTPS) through HTTP 302 redirects and URL rewriting.
    2. Rewrites any response to the browser to selectively rewrite URLs embedded within the HTML so that they go to port 443 (HTTPS). This prevents the 302 redirects from breaking DHTML generated by SharePoint.

    We've got part 1 working fine. The main problem with part 2 is that in the response rewrite, because of XML namespaces and other similar issues, not ALL matches for "http:" can be changed to "https:"; some have to remain "http:". Additionally, some of the "http:" URLs are difficult in that they live in SharePoint-generated JavaScript and their slashes (i.e. "/") are actually represented in the HTML by the UNICODE 6-character escape sequence "\u002f". For example, in the case of these tricky ones, the literal string in the outgoing HTML is:

        http:\u002f\u002fservername.company.com\u002f

    And it should be changed to:

        https:\u002f\u002fservername.company.com\u002f

    Currently we can't even figure out how to get a match in a search/replace expression on these UNICODE sequence string literals. It seems that no matter how we slice it, the Tcl interpreter is translating the "\u002f" string into "/" before it does anything else. We've tried various combinations of the Tcl escaping methods we know about (mainly double quotes and using an extra "\" to escape the "\" in the UNICODE string), but we are looking for more methods, preferably ones that work. Does anyone have any ideas, or any pointers to where we can effectively self-educate about this? Thanks very much in advance.
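
    A minimal Tcl sketch of the quoting, runnable in plain tclsh: brace-quoting stops the Tcl parser from substituting \u002f, and a doubled backslash stops the regexp engine (which also understands \uXXXX escapes in patterns) from doing the same:

        # the braces keep \u002f as six literal characters in the string
        set s {http:\u002f\u002fservername.company.com\u002f}

        # in the pattern, \\ makes the regex engine see a literal backslash;
        # the captured group carries the literal \u002f\u002f into the result
        regsub -all {http:(\\u002f\\u002f)} $s {https:\1} s

        puts $s   ;# prints https:\u002f\u002fservername.company.com\u002f

    The same brace-quoted pattern should drop into an HTTP_RESPONSE_DATA rewrite unchanged, since it is the two layers of substitution (Tcl parser, then regex engine) that were eating the backslash.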

  • Rails/Mongo across multiple different geo-regions

    - by wmarbut
    I have a system that by necessity requires physical presence in three or more different locations, and I need advice on structuring it in such a way that my database stays replicated in a timely manner without horrible latency. I've seen MySQL access and replication be incredibly slow when the application server was trying to talk to a node that wasn't physically collocated. In this case I am using MongoDB.

    - The stack is linux/passenger/ruby/rails/mongodb.
    - The database is write heavy and read light.
    - The infrastructure is Amazon EC2.
    - The application layer must be physically located in 3 or more different locations. I can't justify this requirement further than that it is a requirement.
    - The database, however, needn't be located in more than one location if it can be written to quickly from the other locations.

    From reading Mongo's documentation, replication seems like more of a candidate than sharding, because my datastore is not huge. However, I don't see anything that addresses the issue of speed for servers communicating across large distances with potentially high latency.

  • Does Windows XP automatically reassemble UDP fragments?

    - by Matt Davis
    I've got a Windows application that receives and processes XML messages transmitted via UDP. The application collects the data using Windows "raw" sockets, so the entire layer 3 packet is visible. We've recently run across a problem that has me stumped.

    If the XML messages (i.e., UDP packets) are large (i.e., 1500 bytes), they get fragmented as expected. Ordinarily, this will cause the XML processor to fail, because it attempts to process each UDP packet as if it were a complete XML message. This is a known shortcoming in the system at this stage of its development. On Windows 7, this is exactly what happens: the fragments are received and logged, but no processing occurs. On Windows XP, however, the same fragments are seen, and the XML processor seems to handle everything just fine.

    Does Windows XP automatically reassemble UDP fragments? I could expect this for a normal UDP socket, but it's not expected behavior for a "raw" socket, IMO. Further, if this is the case on Windows XP, why isn't the behavior the same on Windows 7? Is there a way to enable this?

  • Mac OS X L2TP VPN won't connect

    - by smokris
    I'm running Mac OS X Server 10.6, providing an L2TP VPN service. The VPN works just fine when connecting from all computers except one; this one computer stays at the "Connecting..." stage for a while, then says "The L2TP-VPN server did not respond". In the console, I see this:

        6/7/10 10:48:07 AM pppd[341] pppd 2.4.2 (Apple version 412.0.10) started by jdoe, uid 503
        6/7/10 10:48:07 AM pppd[341] L2TP connecting to server 'foo.bar.baz.edu' (256.256.256.256)...
        6/7/10 10:48:07 AM pppd[341] IPSec connection started
        6/7/10 10:48:07 AM racoon[342] Connecting.
        6/7/10 10:48:07 AM racoon[342] IKE Packet: transmit success. (Initiator, Main-Mode message 1).
        6/7/10 10:48:08 AM racoon[342] IKE Packet: receive success. (Initiator, Main-Mode message 2).
        6/7/10 10:48:08 AM racoon[342] IKE Packet: transmit success. (Initiator, Main-Mode message 3).
        6/7/10 10:48:08 AM racoon[342] IKE Packet: receive success. (Initiator, Main-Mode message 4).
        6/7/10 10:48:08 AM racoon[342] IKE Packet: transmit success. (Initiator, Main-Mode message 5).
        6/7/10 10:48:11 AM racoon[342] IKE Packet: transmit success. (Phase1 Retransmit).
        6/7/10 10:48:14 AM racoon[342] IKE Packet: transmit success. (Phase1 Retransmit).
        6/7/10 10:48:17 AM racoon[342] IKE Packet: transmit success. (Phase1 Retransmit).

    ...and the "retransmit" messages continue until the error message pops up. So far I've unsuccessfully tried:

    - rebooting
    - deleting the VPN profile and recreating it
    - verifying the client's internet connection (it is able to reach the VPN server)
    - connecting through several different networks (in case a router was blocking VPN packets)
    - disabling the Mac OS X Firewall on the client
    - making sure that the VPN settings exactly match those of other working computers
    - running software update (the client is on 10.6.3)

    Any ideas?

  • Linux: Force fsck of a read-only mounted filesystem?

    - by Timothy Miller
    I'm developing for a headless embedded appliance, running CentOS 6.2. The user can connect a keyboard, but not a monitor, and a serial console would require opening the case, something we don't want the user to have to do. This all pretty much obviates the possibility of booting from a recovery USB drive, unless all it does is blindly reimage the harddrive. I would like to provide some recovery facilities, and I have written a tool that comes up on /dev/tty1 in place of getty to provide these functions. One such function is fsck.

    I have found out how to remount the root and other file systems read-only. Now that they are read-only, it should be safe to fsck them and then reboot. Unfortunately, fsck complains to me that the filesystems are mounted and refuses to do anything. How can I force fsck to run on a read-only mounted partition? Based on my research, this is going to have to be something obscure: "-f" just means to force repair of a clean (but unmounted) partition, whereas I need to repair a clean or unclean mounted partition. From what I read, this is something "only experts" should do, but no one has bothered to explain how the experts do it. I'm hoping someone can reveal this to me.

    BTW, I've noticed that e2fsck 1.42.4 on Gentoo will let you fsck a mounted partition, even one mounted read-write, but it seems only to do so if fsck is run from a terminal, so it can ask the user if they're sure they want to do something so dangerous. I'm not sure if the CentOS version does the same thing, but it appears that fsck CAN repair a mounted partition, but flatly refuses to when not run from a terminal. One last-resort option is for me to compile my own hacked fsck, but I'm afraid I'll mess it up in some unexpected way. Thanks!

    Note: Originally posted here.
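
    One stock mechanism that sidesteps the mounted-filesystem refusal entirely, offered as a hedged sketch and assuming the appliance can tolerate a reboot: CentOS 6's rc.sysinit looks for a flag file at boot and runs a forced fsck before the filesystems go read-write:

        # schedule a forced fsck of the filesystems on the next boot;
        # the init scripts remove the flag file after the check runs
        touch /forcefsck
        reboot

    The recovery tool could expose this as its "repair filesystems" action, since the check then happens in the one window where nothing is mounted read-write.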

  • Using smartctl to get vendor specific Attributes from ssd drive behind a SmartArray P410 controller

    - by Lairsdragon
    Recently I have deployed some HP servers with SSDs behind a SmartArray P410 controller. While not officially supported by HP, the servers work well so far. Now I'd like to get wear level info, error statistics, etc. from the drives. While the SA P410 supports a pass-through of the SMART command to a single drive in the array, I was not able to get the interesting things from the drive. In this case the value of interest is the wear level indicator (attribute ID 233), but this is only present if the drive is directly attached to a SATA controller.

    smartctl on a directly connected SSD:

        # smartctl -A /dev/sda
        smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen
        Home page is http://smartmontools.sourceforge.net/

        === START OF READ SMART DATA SECTION ===
        SMART Attributes Data Structure revision number: 5
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
          3 Spin_Up_Time            0x0000 100   000   000    Old_age  Offline In_the_past 0
          4 Start_Stop_Count        0x0000 100   000   000    Old_age  Offline In_the_past 0
          5 Reallocated_Sector_Ct   0x0002 100   100   000    Old_age  Always  -           0
          9 Power_On_Hours          0x0002 100   100   000    Old_age  Always  -           8561
         12 Power_Cycle_Count       0x0002 100   100   000    Old_age  Always  -           55
        192 Power-Off_Retract_Count 0x0002 100   100   000    Old_age  Always  -           29
        232 Unknown_Attribute       0x0003 100   100   010    Pre-fail Always  -           0
        233 Unknown_Attribute       0x0002 088   088   000    Old_age  Always  -           0
        225 Load_Cycle_Count        0x0000 198   198   000    Old_age  Offline -           508509
        226 Load-in_Time            0x0002 255   000   000    Old_age  Always  In_the_past 0
        227 Torq-amp_Count          0x0002 000   000   000    Old_age  Always  FAILING_NOW 0
        228 Power-off_Retract_Count 0x0002 000   000   000    Old_age  Always  FAILING_NOW 0

    smartctl on a P410-connected SSD (the output is completely empty):

        # ./smartctl -A -d cciss,0 /dev/cciss/c1d0
        smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

    smartctl on a P410-connected HDD:

        # ./smartctl -A -d cciss,0 /dev/cciss/c0d0
        smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

        Current Drive Temperature:     27 C
        Drive Trip Temperature:        68 C
        Vendor (Seagate) cache information
          Blocks sent to initiator = 1871654030
          Blocks received from initiator = 1360012929
          Blocks read from cache and sent to initiator = 2178203797
          Number of read and write commands whose size <= segment size = 46052239
          Number of read and write commands whose size > segment size = 0
        Vendor (Seagate/Hitachi) factory information
          number of hours powered up = 3363.25
          number of minutes until next internal SMART test = 12

    Am I hunting a bug here, or is this a limitation of the P410 SMART command pass-through?
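
    One thing that may be worth trying, offered with the caveat that it depends on the smartmontools version: more recent releases accept a sat+cciss device type that wraps the ATA SMART command in a SAT pass-through before handing it to the cciss driver, which is what a SATA SSD behind the P410 would need:

        # ask for SAT pass-through on top of the cciss ioctl (newer smartmontools)
        smartctl -A -d sat+cciss,0 /dev/cciss/c1d0

    The 5.39.1 build shown above may be too old for this; building a current smartmontools from source would be the first step if the option is rejected.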

  • Segmentation fault on login to mysql

    - by numberwhun
    Hello everyone! I recently did a fresh install of Ubuntu on my laptop (HP dv7, AMD dual core with 4 gigs RAM). I am working on installing my development environment and tools, and one of the first things I was working on is getting MySQL installed. The following was my configure statement with options:

        ./configure --prefix=/usr/local/mysql --with-big-tables \
            --with-unix-socket-path=/usr/local/mysql/tmp/mysql.sock \
            --with-named-curses-libs=/lib/libncurses.so.5.7

    After I did the make; make install, I did the post configuration, such as setting the root password and installing the mysqld daemon in its rightful place. My issue is when I try to log in to MySQL to start using it. The following shows what happens:

        $ mysql -u root -p
        Enter password:
        Welcome to the MySQL monitor.  Commands end with ; or \g.
        Your MySQL connection id is 1
        Server version: 5.1.42 Source distribution

        Segmentation fault

    I have searched Google extensively, I have searched through the MySQL bugs database, and I have yet to find anything that matches my issue. Here is the contents of my my.cnf file, in case you want to see it:

        $ cat /etc/my.cnf
        [mysqld]
        basedir=/usr/local/mysql
        datadir=/usr/local/mysql
        socket=/usr/local/mysql/tmp/mysql.sock

        [mysql.server]
        user=mysql
        #basedir=/var/lib

        [client]
        socket=/usr/local/mysql/tmp/mysql.sock

        [mysqld_safe]
        err-log=/usr/local/mysql/logs/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid

    I am really hoping that someone here can tell me what has gone wrong with my installation, as I would really love to know. I welcome and look forward to all responses. Thank you in advance! Best regards, Jeff
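
    A hedged first diagnostic step, since the crash hits exactly when the interactive prompt (and thus the curses/readline code named in the configure line) starts up: check what the client binary actually linked against, and grab a backtrace. The binary path below is the one implied by the --prefix above:

        # confirm which curses/readline libraries the client really linked against
        ldd /usr/local/mysql/bin/mysql | grep -iE 'curses|readline'

        # capture a backtrace at the moment of the crash
        gdb --args /usr/local/mysql/bin/mysql -u root -p
        # then inside gdb: "run" to reproduce the crash, "bt" to print the trace

    If the backtrace lands in the hard-coded /lib/libncurses.so.5.7, reconfiguring without --with-named-curses-libs (letting configure find the system library) would be the next experiment.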

  • Follow cursor location from kile to evince.

    - by D Connors
    I know the title is probably not very clear, so I'll try to be as clear as possible here. I'm running Xubuntu on my netbook, and I'm using Kile for my LaTeX editing. Since Kile is native to KDE, I had to manually set it to open PDFs and DVIs in Evince instead of Okular.

    Now, the last time I played around with LaTeX I was using TeXnicCenter on Windows, and it had a very neat feature: whenever I hit "QuickBuild", not only would it open the output .dvi file, but it would also show me exactly the piece of text I was editing. That is, if I were editing line 13 on the 7th page of my document, when I compiled the .tex file the DVI viewer would automatically take me to line 13 on the 7th page, so I wouldn't have to scroll all the way down to it every time I compiled.

    I'm guessing this is a pretty standard feature and Kile probably supports it, but since I don't know what it's called, I'm trying to be as clear as possible about what I mean. This feature is not working for me right now, and I'm guessing it's either because Evince does not support it, or because I have to configure it manually. Which one is it? And how do I configure it manually, if that's the case?
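
    For the record, the feature is usually called forward search (implemented via source specials for DVI output, or SyncTeX for PDF). A hedged sketch of the underlying mechanics from a shell, assuming a SyncTeX-capable toolchain; file names are placeholders:

        # build with synchronization data embedded
        pdflatex -synctex=1 mydoc.tex

        # resolve an editor position (line 13, column 1 of mydoc.tex) to a
        # page and location in the output; SyncTeX-aware viewers use this mapping
        synctex view -i 13:1:mydoc.tex -o mydoc.pdf

    Whether the jump happens automatically then depends on the viewer: Okular has long supported being driven this way from Kile, while Evince's SyncTeX support arrived later, so the viewer version matters.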
