Search Results

Search found 29159 results on 1167 pages for 'xml configuration'.


  • AWS:EC2:: Could not connect FTP client?

    - by heathub
    My Server OS: Amazon Linux. I am trying to set up FTP. I have: installed vsftpd, opened ports 20-21, opened ports 1024-1048. Basically, I followed every one of these steps and started the vsftpd service (the status indicates [ok]). I use FileZilla for my FTP client. Here is my setting/configuration: Host: ec2-XX-XX-XXX-XX.compute-1.amazonaws.com Port: (blank, but I have tried 20 and 21) Server Type: FTP - File Transfer Protocol Logon Type: Normal Username: (tried root and ec2-user) Transfer mode: tried passive and active. I always get this error: Status: Waiting to retry... Status: Resolving address of ec2-XX-XX-XXX-XX.compute-1.amazonaws.com Status: Connecting to XX.XX.XXX.XX:21... Error: Connection timed out Error: Could not connect to server Have I missed any configuration/settings? EDIT: After executing /sbin/iptables -L -n, here is the result: Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination
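
    The FileZilla log shows the TCP connection to port 21 timing out before any FTP dialogue starts, so the usual suspects are the EC2 security group (ports 21 and the passive range 1024-1048 must be opened there, not only in iptables) and vsftpd's passive-mode settings. A minimal passive-mode sketch, assuming the instance's public or elastic IP is substituted for the placeholder:

        # additions to /etc/vsftpd/vsftpd.conf (standard vsftpd directives)
        pasv_enable=YES
        pasv_min_port=1024
        pasv_max_port=1048
        pasv_address=XX.XX.XXX.XX   # the instance's public/elastic IP

    After restarting vsftpd (sudo service vsftpd restart), connect from FileZilla on port 21 with passive transfer mode; a blank port field already defaults to 21 for plain FTP.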

    Read the article

  • Driver for asus wireless PC card WL-100GE

    - by emab
    I have bought an Asus wireless LAN PC Card WL-100GE model, and I am using lubuntu on my system. Since I have no cable connection, I currently cannot access the internet or update my laptop. Device: Broad Range Wireless Card Bus Adaptor - Asus - WL-100GE I searched the web and couldn't find any adequate driver for it. Is there any solution for it? My sudo lshw -C network output is: *-network description: Ethernet interface product: RTL-8100/8101L/8139 PCI Fast Ethernet Adapter vendor: Realtek Semiconductor Co., Ltd. physical id: 3 bus info: pci@0000:02:03.0 logical name: eth0 version: 10 serial: 00:02:3f:ba:55:c8 size: 10Mbit/s capacity: 100Mbit/s width: 32 bits clock: 33MHz capabilities: pm bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=8139too driverversion=0.9.28 duplex=half latency=128 link=no maxlatency=64 mingnt=32 multicast=yes port=MII speed=10Mbit/s resources: irq:19 ioport:3000(size=256) memory:e0200800-e02008ff *-network description: Network controller product: BCM4318 [AirForce One 54g] 802.11g Wireless LAN Controller vendor: Broadcom Corporation physical id: 1 bus info: pci@0000:07:00.0 version: 02 width: 32 bits clock: 33MHz capabilities: bus_master configuration: driver=b43-pci-bridge latency=64 resources: irq:21 memory:38000000-38001fff ----:~$ iwconfig lo no wireless extensions. eth0 no wireless extensions.
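
    The lshw output shows the card's BCM4318 chip claimed by b43-pci-bridge but with no firmware, and on Ubuntu/Lubuntu the open b43 driver only works once its proprietary firmware has been extracted and installed. A hedged sketch, assuming a temporary wired connection through the Realtek eth0 that lshw also lists (or the packages copied over on a USB stick):

        # stock Ubuntu packages that fetch and cut the b43 firmware
        sudo apt-get update
        sudo apt-get install firmware-b43-installer b43-fwcutter
        sudo modprobe -r b43 && sudo modprobe b43
        sudo iwconfig   # a wlan0 interface should now appear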

    Read the article

  • Server monitoring for medium scale UNIX network

    - by nbartolomeo
    I'm looking for suggestions for a good monitoring tool, or tools, to handle a mixed Linux (RedHat 4-5) and HP-UX environment. Currently we are using Hobbit, which is working reasonably well, but it is becoming harder to keep track of which alerts are sent out for which servers. Features I'd like to see: easy configuration of servers; the ability to monitor CPU, network, memory, and specific processes. I've looked into Nagios, but from what I have seen it won't be easy to set up the configuration for all of our servers (~200), and without installing a plugin on each agent I won't be able to monitor processes.
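
    For the ~200-server concern specifically, Nagios configuration is usually kept manageable with templates and hostgroups rather than per-host service definitions; a minimal sketch (host name, address and hostgroup are placeholders, and the agentless check shown avoids touching the servers themselves):

        define hostgroup {
            hostgroup_name  linux-servers
        }
        define host {
            use             linux-server      ; stock template shipped with Nagios
            host_name       web01
            address         10.0.0.11
            hostgroups      linux-servers
        }
        define service {
            use                  generic-service
            hostgroup_name       linux-servers ; one definition covers every member host
            service_description  SSH
            check_command        check_ssh
        }

    Monitoring CPU, memory and individual processes without installing anything per host would still need SNMP (check_snmp) or an agent such as NRPE, so that requirement is the real trade-off between the tools.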

    Read the article

  • Ubuntu 13.10, kernel 3.11 blank screen issue with hybrid graphics

    - by Lagerbaer
    On my HP Envy, which has both an Intel on-chip graphics card and an Nvidia Geforce: *-display UNCLAIMED description: 3D controller product: GK208M [GeForce GT 740M] vendor: NVIDIA Corporation physical id: 0 bus info: pci@0000:01:00.0 version: a1 width: 64 bits clock: 33MHz capabilities: cap_list configuration: latency=0 resources: memory:d2000000-d2ffffff memory:a0000000-afffffff memory:b0000000-b1ffffff ioport:5000(size=128) memory:b2000000-b207ffff *-display description: VGA compatible controller product: 4th Gen Core Processor Integrated Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 06 width: 64 bits clock: 33MHz capabilities: vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:46 memory:d3000000-d33fffff memory:c0000000-cfffffff ioport:6000(size=64) I have trouble with all newer kernels. I basically had to install 12.04 LTS and use their 3.5 kernel family to get the system to boot. The 3.8 from 12.10 or the newest 3.11 from Ubuntu 13.10 leave me with a black screen upon boot. On one occasion I did hear the "log in" sound, but the screen did not display anything. I have purged all nvidia drivers so I guess it should just use the intel drivers, but apparently this is all messed up with newer kernel versions. This is different from the other "nvidia boots into blank screen" bug in that I don't rely solely on an nvidia card. Surely the intel on-chip card should be supported and leave me with something different from a blank screen? Again, it only works with kernel versions 3.5.0-41-generic, not with the 3.11.0-12 one that ships with Ubuntu 13.10. When I go into the grub menu and change the boot options from 'quiet splash' to 'nomodeset' I am able to boot the system, but then I don't get any graphics and trying 'sudo service lightdm start' doesn't succeed (I get 100% CPU for apport, but this doesn't do anything either, so I kill it). Help, I'm all out of ideas. EDIT: Let me add that I'm using the EFI boot system and have a dual-boot installation with Windows 8.
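
    Since nomodeset boots but leaves no usable graphics, one hedged experiment is to keep kernel mode setting for the Intel i915 driver and only stop the nouveau driver from touching the GK208M (with the proprietary packages purged, nouveau is the likely candidate for the blank screen). Treat this as a boot-option experiment rather than a fix:

        # edit /etc/default/grub, then run: sudo update-grub && sudo reboot
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nouveau.modeset=0"
        # another variant sometimes reported to help on hybrid Intel/NVIDIA laptops:
        #   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash i915.modeset=1 nouveau.modeset=0"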

    Read the article

  • Can only bring up one of two interfaces

    - by mstaessen
    I'm having a bizarre issue with my HP Proliant DL 360 G4p server. It has two gigabit ethernet interfaces but I can bring up only one of them. This is starting to freak me out and that's why I turned here. I'm running the x64 Ubuntu 11.10 server edition. lshw -c network shows that the second interface is disabled. I have no idea why and how to enable it. $ sudo lshw -c network *-network:0 description: Ethernet interface product: NetXtreme BCM5704 Gigabit Ethernet vendor: Broadcom Corporation physical id: 2 bus info: pci@0000:02:02.0 logical name: eth0 version: 10 serial: 00:18:71:e3:6d:26 size: 100Mbit/s capacity: 1Gbit/s width: 64 bits clock: 66MHz capabilities: pcix pm vpd msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.119 duplex=full firmware=5704-v3.27b, ASFIPMIc v2.36 ip=10.48.8.x latency=64 link=yes mingnt=64 multicast=yes port=twisted pair speed=100Mbit/s resources: irq:25 memory:fdf70000-fdf7ffff *-network:1 DISABLED description: Ethernet interface product: NetXtreme BCM5704 Gigabit Ethernet vendor: Broadcom Corporation physical id: 2.1 bus info: pci@0000:02:02.1 logical name: eth1 version: 10 serial: 00:18:71:e3:6d:25 capacity: 1Gbit/s width: 64 bits clock: 66MHz capabilities: pcix pm vpd msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.119 firmware=5704-v3.27b latency=64 link=no mingnt=64 multicast=yes port=twisted pair resources: irq:26 memory:fdf60000-fdf6ffff If I try to ifup eth1, then I get $ sudo ifup eth1 Ignoring unknown interface eth1=eth1. I figured that's what happens when there is no eth1 listed in /etc/network/interfaces. But when I add the configuration for eth1, I still can't ifup. $ sudo ifup eth1 RTNETLINK answers: File exists Failed to bring up eth1. I've also tried ifconfig eth1 up but without any result. For clarity, I have added a masked version of /etc/network/interfaces. I don't think it is the cause of the problem though. $ cat /etc/network/interfaces # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback # The primary network interface auto eth0 iface eth0 inet static address 10.48.8.x netmask 255.255.255.y network 10.48.8.z broadcast 10.48.8.t gateway 10.48.8.u auto eth1 iface eth1 inet static address 193.190.253.x netmask 255.255.255.y network 193.190.253.z broadcast 193.190.253.t gateway 193.190.253.u I really need some help fixing this. It's driving me crazy. Thanks.
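
    Two things stand out in the output above: lshw marks eth1 DISABLED with link=no (no carrier), and "RTNETLINK answers: File exists" is what ifupdown prints when it tries to add something that is already present, most often a second default route, and the interfaces file defines a gateway on both eth0 and eth1. A hedged first experiment with standard iproute2/ifupdown commands:

        # remove the "gateway 193.190.253.u" line from the eth1 stanza (two default routes cannot coexist),
        # then clear any half-applied state and retry:
        sudo ip addr flush dev eth1
        sudo ip route flush dev eth1
        sudo ifdown --force eth1
        sudo ifup eth1
        ip addr show eth1

    If the interface comes up but still shows link=no, check the cable and switch port; that field reflects missing carrier, while DISABLED simply means the interface has not been brought up yet.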

    Read the article

  • How-To: Run CMSDK against a RAC cluster

    - by frank.closheim
    Using CMSDK in a production environment often requires a robust, reliable and failover-enabled repository. When using Oracle Real Application Cluster (RAC) with your CMSDK repository you need to have a specific configuration in place to support such a setup. This post will explain the configuration steps required when running CMSDK 9.0.4.6 with Oracle WebLogic Server (WLS). In the previous CMSDK 9.0.4.2 version a RAC-enabled connect string looked like this: (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = rac1)(PORT = 1521))(ADDRESS = (PROTOCOL = TCP)(HOST = rac2)(PORT = 1521))(LOAD_BALANCE = NO)(FAILOVER = ON)(CONNECT_DATA =(SERVICE_NAME = rac)(failover_mode = (type=select)(method=basic))) CMSDK 9.0.4.6 makes use of data sources to connect to the underlying database. These data sources are configured inside your Application Server, such as Oracle WebLogic Server. In Oracle WebLogic Server 10.3.4, a single data source implementation has been introduced to support a RAC cluster. It responds to Fast Application Notification (FAN) events to provide Fast Connection Failover (FCF), Runtime Connection Load-Balancing (RCLB), and RAC instance graceful shutdown. XA affinity is supported at the global transaction Id level. The new feature is called WebLogic Active GridLink for RAC, which is implemented as the GridLink data source within WebLogic Server. This GridLink data source also works with Oracle Single Client Access Name (SCAN). SCAN is a feature used in RAC environments that provides a single name for clients to access any Oracle Database running in a cluster. You can think of SCAN as a cluster alias for databases in the cluster. The benefit is that the client’s connect information does not need to change if you add or remove nodes or databases in the cluster. The CMSDK 9.0.4.6 documentation describes how to create a regular JDBC data source named jdbc/OracleDS. Please refer to the following document which describes in detail how to create a GridLink data source in WLS.
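
    For reference, a SCAN-style thin JDBC URL of the kind a GridLink data source would point at might look like the sketch below; the SCAN host name and service name are placeholders, and the ONS node list that Active GridLink also needs is entered alongside the data source in the WLS console:

        jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=rac-scan.example.com)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=rac)))
        # ONS nodes (FAN event endpoints), e.g. rac-scan.example.com:6200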

    Read the article

  • How to customize live Ubuntu CD?

    - by karthick87
    I would like to customize an Ubuntu live CD by installing some additional packages. I have followed this link but it doesn't seem to work. Can anyone provide clear instructions? Thanks in advance! Customization Packages that I want to install: Thunderbird, Samba, SSH. Changes that I need: remove the Games menu from the Applications menu; a Firefox shortcut on the desktop; Radiance as the default theme; a different default Ubuntu wallpaper. Note: I do not prefer Remastersys; a manual way will be appreciated. Updates: I want the panel to be placed at the bottom. I want to use my own Samba configuration file instead of the default one. And I have a few Firefox shortcuts and folders that I need to show on the desktop. It would also be nice if you could tell me how to change the icon sets. Recent updates: I have customized Ubuntu 10.10 with Firefox shortcuts and a few folders on the desktop. Everything went smoothly, but the installer crashes after choosing the timezone. How do I fix this issue? Also, setting the wallpaper affects the login screen: the wallpaper I set is displayed on the login screen as well. I just want the default one for the login screen.
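
    A compressed sketch of the manual chroot approach, assuming an Ubuntu 10.10 desktop ISO and the squashfs-tools package; the full procedure also regenerates filesystem.manifest and filesystem.size and repacks the ISO with genisoimage, which is omitted here:

        mkdir mnt extract
        sudo mount -o loop ubuntu-10.10-desktop-i386.iso mnt      # ISO name is a placeholder
        sudo unsquashfs -d extract/edit mnt/casper/filesystem.squashfs
        sudo cp /etc/resolv.conf extract/edit/etc/                # give the chroot DNS
        sudo chroot extract/edit apt-get update
        sudo chroot extract/edit apt-get install -y thunderbird samba openssh-server
        sudo mksquashfs extract/edit extract/filesystem.squashfs

    Desktop-level changes (panel position, wallpaper, theme, desktop shortcuts, your smb.conf) are made inside the same chroot, typically via /etc/skel, before the squashfs is rebuilt.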

    Read the article

  • Logs being flooded from Squid for having intercepted and authentication enabled together

    - by Horace
    I have done some hefty Google'ing and I can't seem to find a single solution to this issue that I am currently experiencing. Here is a sample configuration from squid that I have: # # DIGEST Auth # auth_param digest program /usr/sbin/digest_file_auth /etc/squid/digpass auth_param digest children 8 auth_param digest realm LHPROJECTS.LAN Network Proxy auth_param digest nonce_garbage_interval 10 minutes auth_param digest nonce_max_duration 45 minutes auth_param digest nonce_max_count 100 auth_param digest nonce_strictness on # Squid normally listens to port 3128 http_port 192.168.10.2:3128 transparent https_port 192.168.10.2:3128 intercept http_port 192.168.10.2:3130 As noted above, I have three ports defined; two of them are transparent/intercept and one is a regular http port (which I use for authentication). This works rather well, however my logs are getting flooded with the entry "authentication not applicable on intercepted requests" whenever a transparent connection is made. So far, I can't seem to find any documentation that would describe how to suppress these messages?
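
    The message is logged because intercepted requests hit ACLs that involve authentication, which Squid cannot perform on intercepted traffic. Rather than suppressing the log line, a common restructuring is to make authentication apply only to the explicit port; a hedged squid.conf sketch, where the myport ACL and proxy_auth are standard directives but the ACL names, the LAN range and whether this silences every occurrence depend on your Squid version and the rest of your http_access list:

        acl explicit_port myport 3130
        acl localnet src 192.168.10.0/24        # assumption: the intercepted LAN range
        acl authed proxy_auth REQUIRED
        http_access allow explicit_port authed  # only the explicit port triggers auth
        http_access allow localnet              # intercepted clients pass on source address alone
        http_access deny all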

    Read the article

  • Configuring weblogic server console with external server urls, etc

    - by MeBigFatGuy
    There are obviously various 'canned' configuration options in Oracle's WebLogic Server console for setting up data sources, JMS queues, LDAP servers, etc. What I want, however, is a way to configure other servers as well, mostly server URLs, in the console, and allow web applications running on the server to access those configuration settings at runtime, probably through JNDI names. Think of things like a document management server or a workflow server. However, I'm at a loss for how to configure custom JNDI 'data sources' within the WLS console. Is this possible?

    Read the article

  • How to troubleshoot git "unable to set permission" on adding project?

    - by Brian Knoblauch
    Finally decided to move from Subversion to Git, but am having problems with my first project. Did my "git init" and am trying to do a "git add" of my project, but it's failing with: $ git add . error: unable to set permission to '.git/objects/6b/6018c1c76dc5ec159d5cb65bab72 fa300d52f6' error: build.xml: failed to insert into database error: unable to index file build.xml fatal: adding files failed I have full permissions to the directories in question. The only odd thing about it is that it's a drive mounted (and mapped) from a server over CIFS. No problems creating/editing files/permissions with other applications. The host is Windows Vista x64 and I'm running git under Cygwin. Server is Windows 2008. Any other ideas on what I might be doing wrong?
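
    Two hedged suggestions for this exact symptom (git under Cygwin on a CIFS share) rather than a confirmed fix: core.fileMode is a standard git option that stops git from tracking or setting the execute bit, and noacl is Cygwin's mount flag for skipping POSIX permission emulation on a share so that chmod calls stop failing; the exact fstab line below is an assumption about how the drive is mapped:

        # inside the repository
        git config core.fileMode false
        # in Cygwin's /etc/fstab, remount the share without ACL emulation, e.g.:
        #   //server/share /cygdrive/z ntfs binary,noacl,posix=0 0 0
        # then close and reopen the Cygwin terminal so the mount options take effect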

    Read the article

  • Lenovo v570 ubuntu 12.04 wireless hard blocked even when ext. switch is on

    - by user100987
    When I run iwconfig it says lo and eth0 have no wireless extensions, but for wlan0 it says IEEE 802.11bgn ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=off Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off, and I believe that's my problem; I just don't know how to turn it back on. Any help? When I ran lspci | grep Network it gave me this: 02:00.0 Network controller: Intel Corporation Centrino Wireless-N + WiMAX 6150 (rev 67). I know that my wireless is hard blocked because when I run sudo rfkill list all I get: 0: Ideapad_wlan: Wireless LAN Soft blocked: no Hard blocked: no 1: phy0: Wireless LAN Soft blocked: no Hard blocked: yes. When I run lshw -c network I get: *-network DISABLED description: Wireless interface product: Centrino Wireless-N + WiMAX 6150 vendor: Intel Corporation physical id: 0 bus info: pci@0000:02:00.0 logical name: mon1 version: 67 serial: 40:25:c2:d2:96:2c width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list logical wireless ethernet physical configuration: broadcast=yes driver=iwlwifi driverversion=3.2.0-32-generic-pae firmware=41.28.5.1 build 33926 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn resources: irq:43 memory:d0500000-d0501fff *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:03:00.0 logical name: eth0 version: 06 serial: f0:de:f1:d7:a0:4d size: 100Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168e-2.fw ip=192.168.0.65 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s resources: irq:41 ioport:2000(size=256) memory:d0404000-d0404fff memory:d0400000-d0403fff
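
    rfkill reports the hard block on phy0 (the Intel card itself) rather than on the Ideapad_wlan switch, and on Lenovo/IdeaPad hardware a phantom hard block is very often the ideapad_laptop platform driver misreading the switch state. A hedged sequence to try, using stock Ubuntu commands; the BIOS step is a guess:

        sudo rfkill unblock all
        sudo modprobe -r ideapad_laptop && sudo modprobe ideapad_laptop   # force the switch state to be re-read
        sudo rfkill list all
        # if phy0 is still hard blocked: shut down completely, unplug AC and battery for a minute,
        # toggle the external wireless switch and Fn+F5, and check the wireless option in the BIOS.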

    Read the article

  • How to enable systemd instantiated service with puppet?

    - by Richard Pena
    I've got the following puppet service: service { "getty@tty1.service": provider => systemd, ensure => running, enable => true, } When I try to apply this configuration on my client, it throws the following error: err: /Stage[main]//Node[puppetclient]/Service[getty@tty1.service]/enable: change from false to true failed: Could not enable getty@tty1.service: The service is running fine and I can make sure it's started on system boot by adding a symlink to getty.target.wants: ln -s /lib/systemd/system/getty@.service /etc/systemd/system/getty.target.wants/getty@tty1.service Of course, I could go ahead and remove "enable => true" from the service definition and include the symlink manually in the puppet configuration, but shouldn't puppet take care of this? Am I doing something terribly wrong?
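
    Older Puppet providers indeed could not enable instantiated (template@instance) systemd units. Until the provider handles it, a common workaround is to let Puppet manage the wants symlink itself and keep the service resource only for ensure => running; a sketch assuming the unit names above and the standard systemd paths:

        file { '/etc/systemd/system/getty.target.wants/getty@tty1.service':
          ensure => link,
          target => '/lib/systemd/system/getty@.service',
        }
        service { 'getty@tty1.service':
          provider => systemd,
          ensure   => running,
          require  => File['/etc/systemd/system/getty.target.wants/getty@tty1.service'],
        }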

    Read the article

  • How to restrict ssh port forwarding, without denying it?

    - by Kaz
    Suppose I have created an account whose login shell is actually a script which does not permit an interactive login, and only allows a very limited, specific set of commands to be remotely executed. Nevertheless, ssh allows the user of this account to forward ports, which is a hole. Now, the twist is that I actually want that account to set up a specific port forwarding configuration when the ssh session is established. But it must be impossible to configure arbitrary port forwarding. (It is an acceptable solution if the permitted port forwarding configuration is unconditionally established as part of every session.)
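
    OpenSSH can express this directly with the permitopen key option (or PermitOpen in a Match block): forwarding stays enabled but only to the destinations listed. A sketch with a placeholder account name, host and port; note that permitopen constrains local -L forwards, so remote -R forwards still need to be ruled out separately if they matter:

        # per key, in the account's ~/.ssh/authorized_keys:
        permitopen="10.0.0.5:5432",no-agent-forwarding,no-X11-forwarding ssh-rsa AAAA... user@client

        # or server-wide, in /etc/ssh/sshd_config:
        Match User restricted
            AllowTcpForwarding yes
            PermitOpen 10.0.0.5:5432
            X11Forwarding no
            AllowAgentForwarding no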

    Read the article

  • EDQ Technical Enablement for OPN (Prague - June 17-19)

    - by milomir.vojvodic
    Oracle Enterprise Data Quality (EDQ) Technical Enablement and Partner Training - Trusted Data for Your Enterprise Applications. Oracle Enterprise Data Quality helps organizations achieve maximum value from their business-critical applications by delivering fit-for-purpose data. These products also enable individuals and collaborative teams to quickly and easily identify and resolve any problems in underlying data. With Oracle Enterprise Data Quality, customers can identify new opportunities, improve operational efficiency, and more efficiently comply with industry or governmental regulation. Oracle Enterprise Data Quality is designed to serve as a very channel-friendly platform for OPN. This means that pre-built extensions, components and even complete business solutions can readily be built and shared. This allows our customers and partners to be highly efficient in how they deploy custom business solutions, and also allows our partners to develop specialized components, domain knowledge and even complete business solutions. Training is suitable for: · Database administrators · Architects · Technical staff Objectives of the training: After completing this course, participants should: · Have an understanding of the core functionality of EDQ across profiling, auditing, transforming, parsing and matching data · Be able to describe some of the key capabilities and benefits delivered by EDQ · Be able to create and run standalone EDQ processes and jobs · Be ready to start working with data from customers and (with practice) be able to demonstrate EDQ to customers Agenda: 17th June - Fundamentals For Demoing (Profile, Audit, Transform and More): Profiling, Auditing, Transforming, Writing and exporting data, Jobs and scheduling, Publishing, packaging and copying EDQ processes, Introduction to the Customer Data Extension Pack, Realtime Processing via Web Services, The Server Console, Run Profiles, Data Interfaces, Sampling, Publishing metrics to the Dashboard, Users and security. 18th June - Matching: Matching overview, Basic matching configuration, Matching rule hierarchies, Clustering, Merging, Reviewing possible matches, Outputting Match Data, Case study. 19th June - Address Verification: Address Verification Overview, Configuration, Accuracy Flags; Parsing: Parsing Overview, Phrase profiling, Tailoring a CDEP Parser, Base Tokenization, Classification, Reclassification, Selection, Resolution. Register Here. Don’t miss this FREE event. Space is limited. Oracle University, V Parku 2294/4, 148 00 Praha 4, 17.6. – 19.6. 2014, 09:00 – 17:30.

    Read the article

  • SSH automatic logon works for one user but not the other

    - by tinmaru
    I want to enable automatic ssh login using the .ssh/config file for my git user. Here is my .ssh/config file: Host test HostName myserver.net User test IdentityFile ~/.ssh/id_rsa Host git HostName myserver.net User git IdentityFile ~/.ssh/id_rsa It works for my test user but not for my git user, so my global SSH configuration is correct. The configurations are exactly the same as far as I know. It used to work with the git user, but I'm unable to tell what change has broken the automatic logon. When I type ssh -v git I get the following log: ... debug1: Authentications that can continue: publickey,password debug1: Next authentication method: publickey Offering RSA public key: /Users/mylocalusername/.ssh/id_rsa debug1: Authentications that can continue: publickey,password debug1: Next authentication method: password git@myserver.net's password: _ Does anyone know what could be a possible difference?
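
    The log shows the key being offered and the server falling back to password authentication, which points at the server-side setup of the git account rather than the client. The usual culprits are ownership or permissions on the git account's home directory, .ssh directory or authorized_keys file, which sshd silently ignores when they are too open or owned by the wrong user. A hedged checklist, assuming the account's home is /home/git:

        # on the server
        sudo ls -ld /home/git /home/git/.ssh /home/git/.ssh/authorized_keys
        sudo chown -R git:git /home/git/.ssh
        sudo chmod 755 /home/git          # the home directory must not be group/world-writable
        sudo chmod 700 /home/git/.ssh
        sudo chmod 600 /home/git/.ssh/authorized_keys
        sudo tail -f /var/log/auth.log    # watch this while re-running "ssh -v git"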

    Read the article

  • OpenBSD Routing Problem

    - by Ozkan SENOVA
    I am running OpenBSD on network appliance hardware. It has 5 NICs. I want to give different IPs in the same subnet to 3 NICs, e.g.: em0: 192.168.1.5 em1: 192.168.1.90 em2: 192.168.1.56 I made the necessary configuration with ifconfig, and all interfaces work as expected when all the ethernet ports are plugged into the switch. But there is something wrong in routing. If I connect to 192.168.1.5 via any service (http, smtp, etc.), traffic goes over link#3. If I unplug the cable from em2 I can't reach any of the IPs bound on the device. Is there any way to route traffic over different links in this IP configuration?
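
    With three addresses in one subnet the kernel keeps a single route for 192.168.1.0/24, so traffic for all three addresses leaves over one link (link#3, i.e. em2), and unplugging that link takes everything down. One commonly suggested approach is a PF reply-to rule per interface so replies leave on the interface the request arrived on; the 192.168.1.1 next hop is an assumption and the exact syntax should be checked against pf.conf(5) for your OpenBSD release:

        # /etc/pf.conf
        pass in on em0 reply-to (em0 192.168.1.1) to em0 keep state
        pass in on em1 reply-to (em1 192.168.1.1) to em1 keep state
        pass in on em2 reply-to (em2 192.168.1.1) to em2 keep state

    If the goal is really redundancy or extra bandwidth rather than three distinct addresses, aggregating the ports with trunk(4) and putting the extra IPs on the trunk as aliases avoids the routing problem entirely.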

    Read the article

  • Allowing users in from an IP address without certificate client authentication

    - by John
    I need to allow access to my site without SSL client certificates from my office network and with SSL client certificates from outside. Here is my configuration: <Directory /srv/www> AllowOverride All Order deny,allow Deny from all # office network static IP Allow from xxx.xxx.xxx.xxx SSLVerifyClient require SSLOptions +FakeBasicAuth AuthName "My secure area" AuthType Basic AuthUserFile /etc/httpd/ssl/index Require valid-user Satisfy Any </Directory> When I'm inside the network and have a certificate, I can access. When I'm inside the network and don't have a certificate, I can't access; it requires a certificate. When I'm outside the network and have a certificate, I can't access; it shows me the basic login screen. When I'm outside the network and don't have a certificate, I can't access; it shows me the basic login screen. And the following configuration works perfectly: <Directory /srv/www> AllowOverride All Order deny,allow Deny from all Allow from xxx.xxx.xxx.xxx AuthUserFile /srv/www/htpasswd AuthName "Restricted Access" AuthType Basic Require valid-user Satisfy Any </Directory>
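
    SSLVerifyClient require rejects certificate-less clients during the TLS handshake, before the Allow/Satisfy logic is consulted, which matches the symptoms above. A commonly suggested variant, hedged because the behaviour also depends on any SSLVerifyClient setting at the virtual-host level, is to make the certificate optional and let FakeBasicAuth turn a verified certificate into a basic-auth user; each allowed certificate's subject DN then has to appear in the AuthUserFile with mod_ssl's fixed password hash xxj31ZMTZzkVA:

        <Directory /srv/www>
            AllowOverride All
            Order deny,allow
            Deny from all
            # office network static IP
            Allow from xxx.xxx.xxx.xxx
            SSLVerifyClient optional      # "require" blocks cert-less office clients at the TLS layer
            SSLOptions +FakeBasicAuth
            AuthName "My secure area"
            AuthType Basic
            AuthUserFile /etc/httpd/ssl/index
            Require valid-user
            Satisfy Any
        </Directory>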

    Read the article

  • Can't change folder background

    - by newcomer
    I tried to change it via dragging from the Backgrounds and Emblems window, but the icon just goes back to that window rather than changing the folder background. However, I can change the task bar with this drag-and-drop. Is it perhaps something about ownership permissions? If so, how do I change that? The /home/mashruf/.gconf/apps/nautilus/preferences/%gconf.xml file says the following. Should I change this file? How? <?xml version="1.0"?> <gconf> <entry name="click_policy" mtime="1297597800" type="string"> <stringvalue>single</stringvalue> </entry> <entry name="default_folder_viewer" mtime="1297597336" type="string"> <stringvalue>list_view</stringvalue> </entry> <entry name="media_autorun_x_content_open_folder" mtime="1297534321" type="list" ltype="string"> </entry> <entry name="media_autorun_x_content_ignore" mtime="1297534321" type="list" ltype="string"> </entry> <entry name="media_autorun_x_content_start_app" mtime="1297534321" type="list" ltype="string"> <li type="string"> <stringvalue>x-content/software</stringvalue> </li> </entry> <entry name="start_with_location_bar" mtime="1297300028" type="bool" value="true"/> <entry name="side_pane_view" mtime="1297269334" type="string"> <stringvalue>NautilusTreeSidebar</stringvalue> </entry> <entry name="navigation_window_saved_maximized" mtime="1297600306" type="bool" value="false"/> <entry name="navigation_window_saved_geometry" mtime="1297600306" type="string"> <stringvalue>964x608+59+2</stringvalue> </entry> <entry name="sidebar_width" mtime="1297390418" type="int" value="192"/> </gconf>
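
    If a gnome session was ever run with sudo, parts of ~/.gconf can end up owned by root and drag-and-drop changes then silently fail to save, which would explain the icon snapping back. Checking ownership is a safe first step, and gconftool-2 can dump what nautilus has actually stored; the username is taken from the path in the question:

        ls -ld ~/.gconf ~/.gconf/apps/nautilus ~/.gconf/apps/nautilus/preferences
        sudo chown -R mashruf:mashruf /home/mashruf/.gconf
        gconftool-2 -R /apps/nautilus/preferences   # recursive dump of the stored nautilus keys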

    Read the article

  • SEO, IIS 7 and web.config in subfolder issue

    - by tesicg
    We have an ASP.NET application that has a sub-folder with .aspx pages and a separate web.config file in it. The .aspx pages in that sub-folder behave as a separate site. In the web.config file at the application level, I set a rule that removes trailing slashes: <rewrite> <rules> <rule name="RemoveTrailingSlashRule1" stopProcessing="true"> <match url="(.*)/$" /> <conditions> <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" /> <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" /> </conditions> <action type="Redirect" redirectType="Permanent" url="{R:1}" /> </rule> </rules> </rewrite> I expect this rule to propagate down to the sub-folder as well. To access the site in the sub-folder we type http://concert.local/elki/ and should get it without the trailing slash, as http://concert.local/elki. But the trailing slash remains. The web.config file in the sub-folder looks as follows: <configuration> <system.webServer> <defaultDocument> <files> <add value="Sections.aspx" /> </files> </defaultDocument> </system.webServer> </configuration>

    Read the article

  • Raid recovery in gigabyte GA-8I945 Pro

    - by epeleg
    This was a working machine until a few days ago. Now it won't boot into the OS, and during startup it makes clicking sounds (I think from one of the drives). Installed OS: Windows 2003 Web Edition. Hardware: Gigabyte GA-8I945P Pro, 2x160GB SATA in RAID1 configuration, 2 volumes – 25G and the rest. When I installed Windows on it, during setup, I pressed F6 and used the ICH7DH RAID drivers. The manual for the MOBO says: Step 1: After the POST memory test begins and before the operating system boot begins, look for a message which says "Press Ctrl+I to enter Configuration utility" (Figure 4). Press CTRL+I to enter the RAID BIOS setup utility. But the machine never shows this message. BIOS SATA RAID/AHCI Mode is set to RAID. Any ideas or pointers on what I can do to recover my data? Thanks

    Read the article

  • 3 screens on W500 + ATI V5700 + docking station

    - by rafek
    I've got a Lenovo W500 with D-SUB and DVI ports. Most of the time I work with a docking station, which has D-SUB and DVI ports as well. I used to have a laptop + 22" monitor (DVI) configuration. Now I've got laptop + 22" (DVI) + 19" (D-SUB). I was trying to configure everything but with no success. I've got an ATI V5700 in my laptop, and my ATI CCC only allows me to have one external monitor attached at a time. :( Is there any workaround for this situation? I'd like to have the configuration I've just described: laptop + 22" (DVI) + 19" (D-SUB).

    Read the article

  • VMware ESX 3.5 Host Health shown as unknown

    - by dunxd
    I have an ESX 3.5 Update 5 cluster of five host servers, all fully patched as of this Friday. Today I noticed that one of the servers shows its Hardware Health status as Unknown in the VirtualCenter Infrastructure Client. When I look at the Health Status view under Configuration for that host, all the items have status Unknown. The server has exactly the same configuration as the others - same model (HP DL360 G5), memory, NICs, etc. I have tried restarting the management service with service mgmt-vmware restart but this has not resolved the issue. Aside from this, I am not seeing any issues with the cluster - however, I hate having a blind spot like this. Any ideas?
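
    Hardware health on classic ESX 3.5 is gathered through the host's CIM providers rather than hostd alone, so restarting the related agents is a low-risk first step; the pegasus service name is an assumption about the CIM broker on this build:

        service mgmt-vmware restart
        service vmware-vpxa restart
        service pegasus restart
        # then disconnect and reconnect the host in VirtualCenter and recheck the Health Status view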

    Read the article

  • .NET XPath Returns No Results

    - by Stacy Vicknair
    When using XPath in .NET one of the gotchas to be aware of is that all namespaces must be named, otherwise you’ll end up with no results. Default namespaces that are specified with xmlns alone still need to be recognized in the XPath query! Say I had a bit of XML like what is returned from the QueryService web service in Sharepoint:

        <?xml version="1.0" encoding="UTF-8"?>
        <ResponsePacket xmlns="urn:Microsoft.Search.Response">
          <Response>
            <Range>
              ...
              <Results>
                <Document xmlns="urn:Microsoft.Search.Response.Document" relevance="849">
                ...

    When consuming and navigating this response with XPath it is necessary to name all namespaces. Then those named namespaces must be used in reference to the individual element being requested (i.e. doc:Document). In VB:

        Dim xdoc = new XPathDocument(reader)
        Dim nav = xdoc.CreateNavigator()
        Dim nsMgr = new XmlNamespaceManager(nav.NameTable)
        nsMgr.AddNamespace("resp", "urn:Microsoft.Search.Response")
        nsMgr.AddNamespace("doc", "urn:Microsoft.Search.Response.Document")

        Dim results = nav.Select("//doc:Document", nsMgr)

    In C#:

        var xdoc = new XPathDocument(reader);
        var nav = xdoc.CreateNavigator();
        var nsMgr = new XmlNamespaceManager(nav.NameTable);

        nsMgr.AddNamespace("resp", "urn:Microsoft.Search.Response");
        nsMgr.AddNamespace("doc", "urn:Microsoft.Search.Response.Document");

        var results = nav.Select("//doc:Document", nsMgr);

    Read the article

  • Dual displays not working with Xinerama in Ubuntu 12.04

    - by user68489
    I just upgraded from Ubuntu 10.10 to 12.04. I had been using a display configuration just like the one described at http://bitkickers.blogspot.com/2009/08/rotate-just-one-monitor-with.html without any problems under 10.10. I have an nvidia Quadro FX 380 and have upgraded to the latest drivers (295.49). Everything appears to be fine when the system first boots up. However, after logging in, the left screen goes black, and the right screen displays what should be displayed on the left screen. Logging into Ubuntu 2D somewhat improves things. The left screen correctly displays the left portion of the desktop but the right screen contains a duplicate view of the left screen. Turning off Xinerama and enabling TwinView appears to fix the issues but does not allow the right monitor to be rotated. Since the display problems start to occur only after logging on, I thought it might have to do with carryover user configuration from 10.10 but the problems persist even when logging in as a guest. Clicking on System Settings - Displays results in the error message "Could not get screen information - RANDR extension is not present." Any help would be greatly appreciated.

    Read the article
