Search Results

Search found 22701 results on 909 pages for 'missing features'.

Page 701/909 | < Previous Page | 697 698 699 700 701 702 703 704 705 706 707 708  | Next Page >

  • Why doesn't the value in /proc/meminfo seem to map exactly to the system RAM?

    - by Eric Asberry
    The values in /proc/meminfo for MemTotal don't make sense. Eyeballing it as a human, it seems to roughly correspond to the installed RAM, but when an automated utility uses it to report the installed RAM it appears inexact and inconsistent. For a system with 1G of RAM, I would expect the MemTotal line to have a value of 1048576 (1024*1024). But instead, I'm seeing 1029392. On another 4G box, I'm seeing 3870172, which is not a multiple of 1024, and it's not even close to 1029392*4. On an 8G box, I get 8128204, which again seems to have no correlation to the other values, nor is it a multiple of 1024. I'm trying to use this information to report the RAM on a status web page. My workaround is to just "round" it to the nearest 1G multiple, but I'd like to understand why these values seem inconsistent and don't match my expectations. Can somebody fill me in on what I'm missing here? EDIT: To expand on the accepted answer below: the reference can be found here. Also of interest to me from that page, which explains the inconsistency, is this bit: "meminfo: Provides information about distribution and utilization of memory. This varies by architecture and compile options."
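
    A small sketch of that rounding workaround (MemTotal is reported in kB and excludes memory reserved by the kernel and firmware at boot, which is why it never equals the full installed amount):

        # round MemTotal (kB) to the nearest GiB for a status page
        awk '/^MemTotal:/ { printf "%.0f GiB\n", $2 / 1048576 }' /proc/meminfo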

    Read the article

  • Moving from 1 Linux Partition to Many over USB Mount

    - by Mistiry
    We have devices which use Compact Flash for storage. They work OK, but we recently got industrial-grade CF cards to start using. One of the major problems we get is corruption on the flash card. As it is now, these flash cards run Debian with everything in a single partition. We want to have multiple partitions on the new industrial CF cards to help avoid some of the corruption problems. I booted up the device and attached a USB CF reader. I then used fdisk to partition the CF card in the USB reader. How can I move the data to these partitions so that it works? I have a partition for each of these directories: /lib, /var, /root, /boot, /tmp, /home, /etc and /, plus swap space. I imagine I can't just use rsync - do I need to attach a second CF reader with a copy of the CF card, so that it's not active and in use - and then copy from the first reader to the second? How will the system know where to find its files? I know I'd have to change fstab, but that resides in /etc, which will be on a separate partition...how will it find the fstab file if it can't find /etc? And what about grub? I'm at a loss, perhaps it's just because I'm under the weather, or I'm just missing a piece of logic here... Any help is greatly appreciated; this is somewhat urgent, as our existing stock is nearing its end and we don't want to purchase anything but these industrial cards, but we need to get it working with partitions.
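
    One way to stage the copy, as a rough sketch (the device names, mount points and partition numbering are assumptions, and it presumes the new partitions have already been formatted); note that /etc/fstab is read from the root filesystem before the other partitions are mounted, so the root partition needs a usable copy of it:

        mount /dev/sdb1 /mnt/new                 # future /
        for d in lib var root boot tmp home etc; do mkdir -p /mnt/new/$d; done
        mount /dev/sdb2 /mnt/new/var             # one mount per target partition, and so on
        mount /dev/sdb3 /mnt/new/home
        rsync -aHx / /mnt/new/                   # -x keeps rsync on the root filesystem only
        rsync -aH /var/ /mnt/new/var/            # then copy each split-out tree explicitly
        rsync -aH /home/ /mnt/new/home/
        # afterwards: edit /mnt/new/etc/fstab for the new layout and reinstall the bootloader,
        # e.g. grub-install --root-directory=/mnt/new /dev/sdb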

    Read the article

  • Exchange 2010: Send emails via SMTP with custom From address to outside the domain

    - by marsze
    The requirement(s): (1) connect to Exchange via SMTP with (2) basic authentication and send emails with a (3) custom From address to (4) recipients outside the domain. I was able to get (1) - (3) working. I created a dedicated receive connector for this task and configured it like this:

        Permissions:
            ms-Exch-SMTP-Accept-Any-Recipient (for authenticated users)
            ms-Exch-SMTP-Accept-Authoritative-Domain-Sender (for authenticated users)
            ms-Exch-SMTP-Accept-Any-Sender (for authenticated users)
        Authentication:
            TLS
            Basic Authentication (without TLS)
            Exchange Server Authentication

    However, I'm still struggling with (4): I can send with "fake" From addresses to recipients inside the domain. Also, I can send with the original From address to recipients outside the domain. Can you tell me what I'm missing to configure Exchange to send emails with changed From addresses to recipients outside the domain? (Or is this even possible at all?) Thanks. UPDATE: I have to correct myself; it seems to be working after all. There must be some issue with the mailbox I used for testing. It turned out it's working with other external mailboxes. However, I still have no idea what was different there... Anyway, you can take this as documentation on how to configure Exchange in such a way ;)

    Read the article

  • How can I document and automate a system's configuration?

    - by Diomidis Spinellis
    Having a system's configuration represented only by its current state is risky, inefficient, and opaque. At some point you may be left with an unsupported system and no upgrade path. Then configuring a new system compatible with the old one is a process of trial and error. Furthermore, if at some point the system is damaged, the only option is to go back to the most recent full backup and try to remember what changes followed from that point. Also, the only way to create a system compatible with the original is through a complete dump/restore. Finally, in such a setup there's no way to know how you solved a particular problem; the only thing you can do is to look at the corresponding configuration files and try to guess what you changed to achieve the desired effect. Currently, for each system I maintain, I keep a log file where I record all system administration activity, starting from the installation: installation options, added packages, changes in configuration files, updates, problem fixes, etc. In theory this allows me to (manually) replay all changes to arrive at the current state, or to unroll an erroneous change by executing the reverse commands. However, this process is also inefficient, error-prone, and reliant on human judgment. Another thing I've tried is to put /etc configuration files under version control with git. This helps me document the changes automatically and also apply them on a clean setup. But it's not without problems: git has to run under sudo, passwords and private keys may be stored in the repository, installed packages can't be meaningfully tracked, and git will have a fit if I try to extend this approach to all the system's directories. I've also thought about performing all changes through shell scripts or makefiles, but I think this process would require a lot of effort and would be fragile. Are there better methods or tools that I'm missing?
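
    A minimal sketch of the /etc-under-git approach described above, with its caveats in mind (the package listing assumes a Debian-style system); etckeeper is an existing tool that packages up the same idea and also hooks into the package manager, if installing an extra tool is acceptable:

        cd /etc
        sudo git init
        sudo chmod 700 .git                      # keep keys/passwords in the repo readable by root only
        sudo git add -A
        sudo git commit -m "baseline configuration"
        # make package changes at least visible in the history
        dpkg --get-selections | sudo tee /etc/package-selections >/dev/null
        sudo git add package-selections
        sudo git commit -m "record installed packages"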

    Read the article

  • Using GPO to collect data about VMware view activity

    - by MoSiAc
    Our security group wants us to begin logging data for external access to our View environment. At first we thought that View security would be logging all source IPs that are external in nature, so if for some reason there is an intrusion we would have a record of it there. Of course our firewall logs all that information, but correlating it to View is sketchy at best with our current implementation. We know that on View desktops there is a set of keys under Volatile Environment that contains things such as the source IP and username. We have a script in place that, when run as a logon script attached to a user account in AD, collects the information as we need it. If we have a GPO run the same script, the information does not get collected. We feel like there is a piece of the puzzle we're missing but we don't know what. If anyone knows what we're forgetting or misconfiguring that would be great, or if you have a better way for us to collect external source IPs for View specifically, we'd be interested in that as well. Thanks. EDIT: Batch script to dump the values to a text file:

        @echo off
        timeout 20
        echo %computername%/%username% %time% %date% >> c:\vdi\vmware.txt
        echo ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >> c:\vdi\vmware.txt
        reg query "HKEY_CURRENT_USER\Volatile Environment" /v "ViewClient_LoggedOn_Username" >> c:\vdi\vmware.txt
        reg query "HKEY_CURRENT_USER\Volatile Environment" /v "ViewClient_IP_Address" >> c:\vdi\vmware.txt
        echo. >> c:\vdi\vmware.txt

    VB Script to display the values:

        Const HKEY_CURRENT_USER = &H80000001
        Set wmiLocator = CreateObject("WbemScripting.SWbemLocator")
        Set wmiNameSpace = wmiLocator.ConnectServer(".", "root\default")
        Set objRegistry = wmiNameSpace.Get("StdRegProv")
        sPath = "Volatile Environment"
        lRC = objRegistry.GetStringValue(HKEY_CURRENT_USER, sPath, "ViewClien_Machine_Name", vMachine)
        lRC = objRegistry.GetStringValue(HKEY_CURRENT_USER, sPath, "ViewClien_IP_Address", vIP)
        lRC = objRegistry.GetStringValue(HKEY_CURRENT_USER, sPath, "ViewClien_MAC_Address", vMAC)
        msgbox "The Remote Device Name is " & vMachine & " @ " & vIP & " (" & vMAC & ") "

    He wanted me to mention that the batch file actually runs (I can see it counting down when I reconnect), but it does not grab the registry values.

    Read the article

  • Hyper-V VMs cannot access Host resources, and vice versa

    - by Agent
    I have several Hyper-V VMs running on this Win2008 R2 Server box, and up until a reboot of the host server, all the VMs were able to access shared folders on the host. Now, they can't even ping the host server. From what I've seen, I need to set up an Internal-only network through Virtual Network Manager in Hyper-V. I set this up, then tried to enable the Microsoft Virtual Network Switch Protocol option on this internal-only NIC, but I get popups saying: "Your current selection will also disable the following features: Microsoft virtual network switch protocol", which is absolutely stupid, considering the protocol is what I'm ticking the checkbox to enable! As of now, on the host, I have 2 NICs: the physical NIC, which does have the MVNS protocol enabled, and a virtual network adapter created through Hyper-V Virtual Network Manager as an External type of network. Trying to enable MVNS on the virtual adapter also produces the error above. I've tried enabling Client for Microsoft Networks on the physical NIC for IPv6, but every time I do that, all the VMs lose Internet connectivity and I cannot RDP into them. Anything else I can try?

    Read the article

  • "Windows failed to start" loop with 0xc0000225. No install discs, EasyRE/USB iso hasn't worked

    - by mvidaure
    I've been suffering from this "Windows failed to start" loop with 0xc0000225 for 3 days now and I still can't fix it. The major problem is that I don't have any sort of installation disc. However, I have tried EasyRE via both CD and USB, but both result in the same problem. I try to perform an 'Automated Repair' on my computer and I get, in red text, "The selected partition is corrupted and could not be accessed or repaired. Please select a different drive to continue." The partition is also labeled as NO under Active. Since I do not have the installation discs, I made a USB with a Windows_7_Recovery_Disc ISO (as shown here: http://www.sevenforums.com/tutorials/31541-windows-7-usb-dvd-download-tool.html), but it also doesn't work. I get a blue screen that says "RECOVERY: Your PC needs to be repaired. The application or operating system could not be loaded because a required file is missing or contains errors... File: \WINDOWS\system32\winload.efi Error code: 0xc0000225 You'll need to use the recovery tools on your installation media. If you don't have any installation media, contact your system administrator or PC manufacturer." Thanks in advance! Miguel

    Read the article

  • Unexpected behaviour in a Lotus Notes programmable table

    - by Mark B
    I'm designing a workflow database in Lotus Notes 6.0.3 (soon upgrading to 8.5), and my OS is Windows XP. I have recently tried converting a tabbed table into a programmable one. This was so that I could control which tab was displayed to the user when it was opened, so that they were presented with the most appropriate one for that document's progress through the workflow. That part of it works! One of the tabs features a radio button that controls visibility of the next tab, and a pair of cascading dialogue boxes. One contains the static list "Person":"Team", and the other has a formula based on the first:

        view := @If(PeerReview = "Team"; "GroupNames"; "GroupMembers");
        @Unique(@DbColumn(""; ""; view; 1))

    The dialogue boxes have the property "Refresh fields on keyword change" selected. The behaviour that I wasn't expecting is this. If the radio button is set to "Yes" and a value is selected in one of the dialogue boxes, the table opens the next tab. If the radio button is set to "No" and a value is selected in one of the dialogue boxes, the entire table is hidden. I can duplicate the latter by switching off the "Refresh fields on keyword change" property on the dialogue boxes and instead pressing F9 after selecting a value. I have no idea why the former occurs, though. The table is called "RFCInfo", and I have a field on the form called "$RFCInfo" which is editable, hidden from all users who aren't me and initially set by a Postopen script, which I can post if necessary - it's essentially a Select Case statement that looks at a particular item value and returns the name of the table row relating to that value. Can anyone offer any pointers?

    Read the article

  • IIS Web Farm Framework servers are automatically set to "unavailable" even when they are healthy... And they never return to the available state!

    - by JohannesH
    I have 2 web farm configurations, one with 2 member servers and one with 3 member servers. I have health monitoring set up on both farms and the monitoring tool reports all servers as being healthy. However, after a while all the servers are marked as "Unavailable" and "Healthy" in the "Monitoring and Management" screen (in the "Servers" screen they are all listed with "Yes" in the "Ready for Load Balancing" column). Viewing the event log on the web farm controller and on the farm servers doesn't reveal anything interesting; there are no warnings or errors in the period where the servers became unavailable. There are a couple of informational events about the worker process getting shut down due to inactivity, but I hope this isn't the cause, since that would mean the farms will die during the night when the load is low. Am I missing something? EDIT: By the way, I think it's very odd that the application pool shuts down on the servers, since the health monitoring system is polling an aspx page on each server. Shouldn't that keep them going? EDIT2: Now I've also experienced this problem with the RTW version of Web Farm Framework 2.

    Read the article

  • ImageMagick installation confusion

    - by Codemonkey
    Well, this is turning out to be a pain. I'm on CentOS 6.5. I saw on the PECL ImageMagick changelog that they added a load of constants for new filters, such as LANCZOS2SHARP. Some testing I did earlier suggested that Photoshop 6.5 was able to downsize photos with better clarity than my current ImageMagick's best effort of LANCZOS, so I thought I'd try to upgrade to test out the newer filters. So, first port of call: get the source from PECL for 3.2.0RC1. Installed with no problems. But, ah-ha. Although it says it only requires IM 6.2.4, the filters I'm after don't work unless you have 6.6.6+. So I went to http://www.imagemagick.org/script/install-source.php to install the newest version. This is the bit that has puzzled me. It all appears to have installed fine. The tests work fine, passing 40 out of 40. If I run identify -version on the command line it outputs 6.8.9, but if I echo Imagick::getVersion() in PHP it shows 6.5.4, even after restarting php-fpm. rpm -qa | grep ImageMagick shows that I still have 6.5.4 installed, and locate ImageMagick also only seems to show the 6.5.4 one. I feel that the missing link here is ImageMagick-devel; do I need to install that too? How do I go about doing that? Or do I just need to reinstall pecl-imagemagick 3.2.0RC1 now that I have the latest IM installed?
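
    One possible sequence, as a sketch (the package names, paths and version numbers here are assumptions to adjust to the actual source tree): leave only one ImageMagick visible to PHP, then rebuild the PECL extension against it. Building from source installs the headers under the chosen prefix, so the distro ImageMagick-devel package should not be needed on top:

        sudo yum remove ImageMagick ImageMagick-devel     # drop the distro 6.5.4 packages
        cd ImageMagick-6.8.9-*                            # the source tree built earlier
        ./configure --prefix=/usr/local && make && sudo make install
        sudo pecl uninstall imagick
        sudo pecl install imagick-3.2.0RC1                # give /usr/local when asked for the ImageMagick prefix
        sudo service php-fpm restart
        php -r 'print_r(Imagick::getVersion());'          # should now report the 6.8.9 library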

    Read the article

  • What should I use to ping multiple IPs and get notified of time outs?

    - by HumanVirus
    I've been using MultiPing to ping hundreds of IPs (from access points and such) and check their performance (packet loss, latency) and uptime. The program is very easy to use, but I was wondering if someone could recommend something that would work better and that would also work on Linux. The features I'm looking for are: notification types - at least desktop notifications and SMS, but it would be great if it also had e-mail, IM, or other types of notifications (MultiPing has some of these, but they don't work too well); being notified about the root problem only - since some devices are dependent on others, I'd like to be notified only about the root problem. E.g. let's say I have A[x.x.x.222] -> B[x.x.x.33] -> C[x.x.x.44] -> D[x.x.x.55], and B goes down; therefore C and D will also be down. Is it possible to get a notification only about B being down? It should also be light on resources, and ideally multiplatform or at least available for both Linux and Windows. I've heard about Nagios and Shinken being used for monitoring. Would you recommend that I use something of the sort or would that be too much for my needs? If using Nagios, Shinken, or similar software is recommended, can anyone tell me what sites I should go to or what books I should get that would be good for someone who is totally new at this? I'd appreciate any suggestions.
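
    For scale, the core check itself is simple; here is a sketch of the idea in shell (hosts.txt, the timeout and the recipient address are placeholders), with tools like Nagios or Shinken adding the dependency logic, scheduling and notification channels on top. fping can do the same probing for many hosts in parallel:

        while read -r ip; do
            if ! ping -c 3 -W 2 "$ip" >/dev/null 2>&1; then
                echo "DOWN: $ip at $(date)" | mail -s "ping alert: $ip" admin@example.com
            fi
        done < hosts.txt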

    Read the article

  • VirtualBox bridged network not working as expected

    - by iby chenko
    I am having a hard time getting bridged networking to work with VirtualBox. The idea is to have the host as well as one or more guests on the same LAN. Using NAT (the default) I do get access to the internet and any node on the LAN when working from one of the VM guests. However, no LAN node, including the host, can access (or ping) a guest in a VM. I need to be able to use any guest as if it were a physical computer on the network (it needs to be accessible from any machine on the LAN). According to my understanding of the VirtualBox documentation, this should be Bridged mode. I think I set it up correctly; well, actually there is not much to it: (1) select Bridged mode in the VM network setup, (2) select the physical NIC of the host to connect the bridge to, (3) start the VM. When I do this, each VM does get a new IP address that corresponds to the LAN settings: 192.168.1.100, 192.168.1.102, 192.168.1.103, etc., where the host is 192.168.1.80 / 255.255.255.0 (IP addresses above 100 are served by the DHCP server). This seems to be correct based on what I know about Ethernet. From a VM I can ping other nodes like 192.168.1.50, etc., and I still get Ethernet access. So far so good... But I STILL cannot ping any of the other VMs (running ones, of course). I cannot ping them from other VMs, from the host or from other nodes on the LAN. Aside from the fact that the IP addresses handed to the guests are now local, this still acts the same as NAT. What is going on? What am I missing? Regards, I
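
    The same three steps can be checked from the command line with VBoxManage, as a sketch (the VM name "guest1" and the host interface eth0 are examples only):

        VBoxManage modifyvm "guest1" --nic1 bridged --bridgeadapter1 eth0   # steps 1 and 2
        VBoxManage startvm "guest1"                                         # step 3
        VBoxManage showvminfo "guest1" | grep -i 'NIC 1'                    # confirm the bridge settings took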

    Read the article

  • How can I get DVDs playing after a Vista to XP change?

    - by Liath
    I replaced my Vista install on a Dell Inspiron 1525 with XP and have managed to get most things up and running again; however, I'm having trouble playing DVDs. When I try to play a DVD I get the following message: "Windows Media Player cannot play this DVD because there is a problem with digital copy protection between your DVD drive, decoder, and video card. Try installing an updated driver for your video card." I have ensured that my drive is configured to play Region 2 discs (I'm in the UK) and I've installed the most up-to-date XP codec pack, which makes me think it's a driver issue. In Device Manager my DVD drivers are up to date; however, under "Other Devices" I'm missing several entries which sound key: Audio Device on High Definition Audio Bus, Modem Device on High Definition Audio Bus, Video Controller, Video Controller (VGA Compatible). However, I've installed all the relevant drivers I can find on the Dell website. The drive itself is working - I've run software from the drive. I'm afraid I am far from a sysadmin, so I'm struggling on this one. How can I get my DVDs playing again?

    Read the article

  • ESX 3.5 refuses to update

    - by Speeddymon
    I have a set of ESX 3.5 servers in 2 different datacenters. One is DR, one is production. They are on the same VLAN, so I can access any of them on the private network from my vCenter server. Last month, as a learning experience (I hadn't dealt with ESX much before), I updated the DR server. Other than finding out that a couple of bundles had to be installed manually in order to get the rest to install from vCenter, it went off without a hitch. Now I'm trying to do the same for our production servers and it is not working. I've googled around for the error I get during the scan and investigated loads of different solutions (editing the integrity file, checking DNS, etc.) -- I did already install the 2 bundles that had to be installed manually -- but the scan from vCenter is just not working. Side note: I did just scan the DR server again and that scan works fine, so this shouldn't be a problem with vCenter that has cropped up recently -- it has to be something else. The error I get is: "Patch metadata for (servername) missing. Please download updates metadata first. Failed to scan (servername) for updates." I'm all out of ideas on how to make this work, so any help would be hugely appreciated.

    Read the article

  • htaccess rewriting all subdomains to subdirectories

    - by indorock
    I'm trying to build a catch-all for any subdomains (not captured by previous rewrite rules) for a certain domain, and serve a website from a subdirectory that resides in the same folder as the .htaccess file. I already have my vhosts.conf set to send all unmapped requests to a "playground" folder, where I want to easily create new subdomains by simply adding a subfolder. So, my structure looks like this:

        /var/www/playground
        |-> /foo
        |-> /bar

    The .htaccess lives inside the /playground folder, with /foo and /bar being separate websites. I want http://foo.domain.com to point to /foo and http://bar.domain.com to /bar. Here is my .htaccess file:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^([^.]+).domain.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/%1/(.*)
        RewriteRule ^(.*) /%1/$1 [L]

    This is supposed to capture the subdomain, add it as a subfolder in the RewriteRule, then append the slash and path information after it. The second RewriteCond is there to prevent an infinite loop. My idea was that %1 in the second RewriteCond would be able to reference the capture group in the first RewriteCond. But so far I haven't had any success; it always ends up in a redirect loop. If I replace %1 in the second RewriteCond with a hardcoded 'foo' or 'bar', it works, which leads me to believe that you cannot refer to a capture group inside a RewriteCond. Is this true? Or am I missing something?
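
    A quick way to exercise the catch-all without touching DNS is to send the subdomain in the Host header and watch where the request ends up (a sketch; the IP and hostnames are examples), which makes it easier to see whether the loop really comes from the second RewriteCond:

        curl -sv -o /dev/null -H 'Host: foo.domain.com' http://127.0.0.1/
        curl -sv -o /dev/null -H 'Host: bar.domain.com' http://127.0.0.1/somepage.html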

    Read the article

  • How do I serve Ruby on Rails applications on Windows Server 2008?

    - by Adam Lassek
    I have spent the last several hours attempting to get Ruby on Rails running on a Windows server with no luck. At first I tried configuring a test application through IIS7's FastCGI support, but the documentation for this is not very good. I've been following this blog entry, and this one, and this one, and this one, but everything seems to be missing major steps or to be out of date. And every article keeps linking back to this Howto from rubyonrails.org that doesn't exist. The sense that I'm getting is that even if I manage to make this work, IIS' FastCGI isn't good enough to use in a production environment anyway. So it looks like my best bet is to set up a reverse proxy in IIS that points to Apache & Mongrel/Passenger using ARR and UrlRewrite. Is there anybody else out there stuck deploying a Rails application on a Windows stack? Am I on the right track? Can you give me a better idea of how to configure this? I believe Plesk already installed an instance of Apache/Tomcat running on this server using a different port, so adding another virtual host shouldn't be difficult; the hardest part seems to be setting up the reverse proxy through IIS.

    Read the article

  • Weblogic WLST classpath

    - by user43736
    When I run the WLST .sh script to set the env as follows, why can't I see the updated path when I do echo?

        [linbox2 bin]$ ./setWLSEnv.sh
        CLASSPATH=/directory/ols_wls/patch_wlss1032/profiles/default/sys_manifest_classpath/weblogic_patch.jar:
        /directory/ols_wls/patch_wls1032/profiles/default/sys_manifest_classpath/weblogic_patch.jar:
        /directory/ols_wls/patch_oepe1032/profiles/default/sys_manifest_classpath/weblogic_patch.jar:
        /directory/ols_wls/patch_ocm1031/profiles/default/sys_manifest_classpath/weblogic_patch.jar:
        /directory/ols_wls/jrockit_160_14_R27.6.5-32/lib/tools.jar:
        /directory/ols_wls/utils/config/10.3/config-launch.jar:
        /directory/ols_wls/wlserver_10.3/server/lib/weblogic_sp.jar:
        /directory/ols_wls/wlserver_10.3/server/lib/weblogic.jar:
        /directory/ols_wls/modules/features/weblogic.server.modules_10.3.2.0.jar:
        /directory/ols_wls/wlserver_10.3/server/lib/webservices.jar:
        /directory/ols_wls/modules/org.apache.ant_1.7.0/lib/ant-all.jar:
        /directory/ols_wls/modules/net.sf.antcontrib_1.0.0.0_1-0b2/lib/ant-contrib.jar:

        PATH=/directory/ols_wls/wlserver_10.3/server/bin:
        /directory/ols_wls/modules/org.apache.ant_1.7.0/bin:
        /directory/ols_wls/jrockit_160_14_R27.6.5-32/jre/bin:
        /directory/ols_wls/jrockit_160_14_R27.6.5-32/bin:
        /usr/kerberos/bin:
        /usr/local/bin:
        /bin:
        /usr/bin:
        /usr/X11R6/bin:
        /usr/java/j2sdk1.4.2_11/bin/bin:
        /home/oracle/bin:
        /directory/wls_olwcs/jdk160_14_R27.6.5-32/bin:
        /directory/ccanywhere81/bin:/directory/oracle/oracle/product/10.2.0/client_1/bin

        Your environment has been set.

        [linbox2 bin]$ export CLASSPATH
        [linbox2 bin]$ export PATH
        [linbox2 bin]$ echo $PATH
        /usr/kerberos/bin:
        /usr/local/bin:
        /bin:
        /usr/bin:
        /usr/X11R6/bin:
        /usr/java/j2sdk1.4.2_11/bin/bin:
        /home/oracle/bin:
        /directory/wls_olwcs/jdk160_14_R27.6.5-32/bin:
        /directory/ccanywhere81/bin:
        /directory/oracle/oracle/product/10.2.0/client_1/bin
        [linbox2 bin]$
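
    A sketch of the usual explanation, assuming standard shell semantics rather than anything WLST-specific: ./setWLSEnv.sh runs in a child shell, so the variables it exports disappear when that shell exits; sourcing the script runs it in the current shell instead, and the values stick:

        . ./setWLSEnv.sh        # or: source ./setWLSEnv.sh
        echo $CLASSPATH         # now shows the WebLogic jars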

    Read the article

  • Lost user account for Windows Vista

    - by annelie
    Hello, I'm trying to help a friend who's lost her user account in Vista. I know there's supposed to be a way you can boot the computer from the Vista installation disc and create an admin account you can later log in with, but her installation disc is in Australia and her laptop is in London. Is there any other way to get in? Or would it be better to try and access just the hard drive? She's mainly concerned with getting all her data off it. As for how she lost the account, I'll let her explain in her words. :) My computer basically got some virus and now is up sh*t creek. it told me i had this cryptic thingy majiggy was missing and then this fake virus told me i needed to scan my computer. SO i tried to do malware thing but it kept shutting my computer down. ANYWAY...now its it will only open up with 'launch startup repair' and has got rid of my settings for logging in and wants me to be 'other user' which i have no password or username for'...so basically im stuffed. This is Windows Vista by the way. Thanks, Annelie

    Read the article

  • Is there any way to get Spotlight or the media browser in OSX (Snow Leopard) to index and recognize metadata?

    - by jaydles
    It seems silly to go to all the trouble of assigning "Face" data to thousands of photos, but not make it possible to use that data to locate them outside of that application. I know that that metadata is stored in the "library" database for Aperture/iPhoto, rather than on the actual files (which is too bad). And I can even potentially see why it might create challenges for Spotlight to use it, since Spotlight is presumably a file-indexing system, not a media organizer, but surely the media browser used across the other OSX apps is intended to use it? The media browser's whole purpose seems to be to let you easily locate and reference the items you organize in one of the iLife apps (iPhoto or Aperture, in this case) from the others (say, iMovie, or Mail). It's particularly vexing since the photo app on the iPhone sorts by faces by default. Additionally, the Mac-based media browser does access smart albums and folders, so you could establish a workaround by creating a smart album for each face, place, or tag and access them that way, but it seems like there must be an easier way. Am I missing something?

    Read the article

  • Kindle (client) for Mac--text search or highlighting/notes?

    - by doug
    Just so we're clear, I'm talking about the client/software version here (i.e., the one you install on your Mac or PC), not the device. The Kindle client was recently released for the Mac. I downloaded it and bought a couple of Kindle-edition books to view on this client. Astonishingly, two features I consider to be more or less essential to any ebook reader are missing in the Kindle client (either that, or I can't find them): (i) text searching, and (ii) highlighting text. First, does anyone know how to access the search feature? I'm aware of the "Go To" button at the top middle of the reader window; the options in that menu when you click the button are "Cover", "Table of Contents", "Beginning" and "Location". "Location" requires that you type in an integer (but it doesn't correspond to a page number; e.g., typing "167" brought me to the table of contents), not a search term. Second, there's a "Show Notes and Marks" button in the upper right-hand corner of the window, yet I can't find any way to highlight text. The only kind of "note" or "mark" I have been able to record is to "bookmark" a page by clicking the "bookmark" button, also at the top of the window.

    Read the article

  • Setup Apache with IPv6

    - by mrz
    I have two virtual machines on my computer. I have Apache installed on one of them (referred to as the "server" from here on), and I have set Apache to listen on an IPv6 address. When I enter that IPv6 address into a web browser on the "server" I see my index.html file. So far so good... I want to be able to open a web browser on the other virtual machine (the "client") and see the index.html. But when I try entering the IPv6 address of the "server" in a web browser on the "client" I get an "Unable to establish a connection to the server" error. I can ping6 the "client" from the "server" and vice versa. There is only one thing to mention: ifconfig on the "server" shows 3 different IPv6 addresses, two of which are Global scope, and there is one Link-scope IPv6 address. On the "client" there is only one Link-scope IPv6 address, though. I can only ping the Link IPv6 addresses; pinging the other IPv6 addresses results in connect: Network is unreachable. And if I set Apache to listen on the Link IPv6 address, rcapache2 start fails. Any thoughts on what I am probably missing/doing wrong?
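
    A few checks from the "server" side, as a sketch (the interface name and addresses are examples). Global-scope addresses are the ones reachable from the other VM; link-local addresses (fe80::/10) only work on the same link and need an explicit interface, which is consistent with only the Link IPv6s answering ping:

        ip -6 addr show dev eth0               # which addresses are "scope global" vs "scope link"
        ping6 -I eth0 fe80::1234:abcd          # link-local targets need -I to pick the interface
        netstat -tln | grep ':80'              # confirm Apache has actually bound an IPv6 socket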

    Read the article

  • Server redirect

    - by Tyy
    I have an ASP.NET app XYZ which requires SSL. I am supposed to work with an IIS instance that has only one Web Site - DefaultWebSite - containing multiple apps and virtual directories (3rd party). So, my app is located at domain.com/XYZ. I have to meet these conditions: 1.) requesting DefaultWebSite (domain.com) will run my app. It could redirect to ../XYZ, but I would rather it didn't. If it has to be done this way, requesting DefaultWebSite or my app over both HTTP and HTTPS must always end up redirecting to https://domain.com/XYZ. 2.) I can't touch any other apps or virtual directories, can't create an additional Web Site, can't set DefaultWebSite to require SSL, ... EDIT: 3.) transmitted data (GET or POST) must be preserved. I tried to: set the Web Site root directory to my app, but this caused other apps to crash because of my Web.config (not sure why); set up an HTTP redirect on DefaultWebSite to https://domain.com/XYZ, which seems to work correctly, but doesn't work if the user requests my app directly (redirected to domain.com/XYZ/XYZ, or a redirect loop); set up a Default Document, but this seems to work only if it is located in the Web Site root directory. I know I could write a simple .aspx with Response.Redirect, but... is there any better solution? Am I missing something?

    Read the article

  • How to "open" existing VMs in Hyper-V without importing them?

    - by Borek
    I had a PC with two physical disks: C:, containing the host operating system, and D:, containing a folder D:\VMs where all my virtual machines were stored. Now the C: disk has died. I bought a new one, reinstalled Windows on it, enabled the Hyper-V feature, and now I just need to open the VMs from the D:\VMs folder. However, I don't seem to be able to find a menu item or anything that would allow me to do that - the only thing I see is the "import" command, which unfortunately requires the VMs to be explicitly exported (my machines weren't). I firmly believe that when I have all the files constituting a VM (the VHD file, some XML files describing the settings, etc.) it must somehow be possible to just "open" these existing VMs in Hyper-V, right? What command am I missing? Edit: I know I can create blank virtual machines and then just point them at the existing VHDs. However, I am not sure about all the different settings I've made to those VMs, so I hope there's a way to simply open the existing VMs instead of recreating them.

    Read the article

  • CentOS: AJAX/jQuery not working

    - by Australiya
    In a nutshell, I have an unmanaged VPS. At one time it had Ubuntu 10.10 Server on it; then I reinstalled it with CentOS 6 and updated it to CentOS 6.2. Now the problem is that the AJAX/jQuery shoutbox has ceased working (I assume it uses one of the two to inject itself into a div and then refresh when new messages are posted; I'm not sure, I didn't write these), and the plugboard script now shows me a lime-green blank page. No changes to the source code have been made, and the files are in the location they expect to be. I have Apache2, MySQL 5 and PHP 5, and I did install the php-xml libraries. What am I missing? It's gotta be server-side, because the scripts themselves are fine; if I move them to a different server they work just fine. I'm not getting any errors related to this in the error_log file. Thanks in advance! Edit: If you want, you can look at the plugboard at kazeshini.net/plugboard and there's an installation of the chatbox at silverlotus.kazeshini.net/yshout/example. I know nothing about scripts and debugging, so it's better that someone else looks at it than someone who doesn't know what they're looking for.
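
    A few quick server-side checks, as a sketch (paths assume a stock CentOS Apache/PHP setup; the extension list is only an example of what such scripts commonly need):

        tail -f /var/log/httpd/error_log         # watch while reloading the shoutbox/plugboard pages
        getenforce                               # CentOS enforces SELinux by default, unlike Ubuntu
        php -m | grep -Ei 'json|xml|mysql'       # confirm the extensions the scripts expect are loaded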

    Read the article
