Search Results

Search found 25877 results on 1036 pages for 'information driven'.

  • How to connect through a proxy using Remote Desktop?

    - by scottmarlowe
    So I've got a home server running Windows Server 2003. I use a dual network card setup and Routing and Remote Access to link the internal, private network to the external connection. The external connection hooks directly to my cable modem (so no routers or other devices sitting between). The problem I'm having is that I can't connect remotely from a location outside the house (so connecting to the server's external connection) using either Remote Desktop or VNC. I have enabled both ports in Routing and Remote Access's firewall to allow access, and I have enabled Remote Desktop in Windows Server 2003. The odd thing is that I can access my home server's SVN repository and I can even ping the server's IP. I am using the IP to attempt to connect, though I use a dyndns.com-provided name to connect to my SVN repository, so it shouldn't make a difference (I know the IP is getting resolved correctly). Any ideas on where to start diagnosing this one? I haven't seen anything in my server's event log. If any other info is needed, let me know. Thanks.

    UPDATE: One last piece of information: we use a proxy server at work, which I'm nearly 100% sure is the culprit. I have a workaround: if I connect to our VPN (even though I'm already inside the building), I am able to connect to my home server. This is with VNC. However, is there a way to connect through a proxy using Remote Desktop?

    ONE MORE UPDATE: Indeed, it was the HTTP proxy I'm sitting behind at work that was causing the issue. An acceptable workaround is to use my VPN connection to bypass the proxy, and I'm in!
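    If the proxy allows HTTP CONNECT to arbitrary ports, RDP can also be tunneled through it without the VPN. A minimal sketch using Nmap's ncat; the proxy address and home-server name are placeholders, and the assumption that the proxy permits CONNECT to port 3389 is exactly what often does not hold (many corporate proxies only allow CONNECT to 443):

        rem listen on a spare local port and relay each connection through the proxy
        ncat -l 127.0.0.1 13389 --sh-exec "ncat --proxy proxy.example.com:8080 --proxy-type http home.example.org 3389"

        rem then point the RDP client at the local end of the tunnel
        mstsc /v:127.0.0.1:13389

    If the proxy refuses the CONNECT, this fails in the same way the direct connection does, which is itself a useful diagnostic.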

  • How to auto-cc a system email account any time a user creates an appointment

    - by Ferdy
    To keep this question short, I won't explain my full architecture or my reasons for wanting this: is it possible to auto-cc a certain email account any time an Exchange user creates an appointment or meeting in his own calendar?

    - Is it possible using rules?
    - Our Exchange 2007 server is outsourced; I cannot change the configuration or install plugins server-side.
    - Preferably, it should still work server-side, because users may use the Outlook client but also Outlook Web Access.
    - Is there any other way, perhaps using group policies?

    My conclusion so far is that the only viable way to accomplish this is to build an Outlook add-on. The problem there is that it would need to be managed for thousands of desktop users, and the add-on would not work when using another client (OWA, mobile). An alternative architecture could be to pull the information from each user's calendar on a scheduled basis (see the sketch below). Given that we are talking about a lot of users, scalability is a major issue; this has also been confirmed by Microsoft. Can you confirm that my thinking is correct, or do you have any other solutions?
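    For the polling approach, Exchange 2007 SP1 exposes calendars through Exchange Web Services, which needs no server-side changes. A rough PowerShell sketch using the EWS Managed API; the DLL path, mailbox address, and polling window are all assumptions, not taken from the question:

        # load the EWS Managed API (the install path is an assumption)
        Add-Type -Path "C:\Program Files\Microsoft\Exchange\Web Services\1.0\Microsoft.Exchange.WebServices.dll"

        $svc = New-Object Microsoft.Exchange.WebServices.Data.ExchangeService -ArgumentList (
            [Microsoft.Exchange.WebServices.Data.ExchangeVersion]::Exchange2007_SP1)
        $svc.UseDefaultCredentials = $true
        $svc.AutodiscoverUrl("user@example.com")

        # appointments starting in the next 7 days for this mailbox
        $view = New-Object Microsoft.Exchange.WebServices.Data.CalendarView -ArgumentList (Get-Date), (Get-Date).AddDays(7)
        $svc.FindAppointments(
            [Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::Calendar,
            $view) | Select-Object Subject, Start, End

    Scaling this to thousands of mailboxes would need impersonation rights and careful throttling, which is presumably where Microsoft's scalability warning comes from.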

  • Migrating Windows 2003 File Server Cluster to Windows 2008 R2 Standalone?

    - by Tatas
    We have a situation where we have an aging Windows 2003 file server cluster that we'd like to move to a standalone Windows Server 2008 R2 VM residing in our Hyper-V R2 installation. We see no need to keep the clustering, as Hyper-V now provides our failover/redundancy. Usually, in a standalone file server migration, we migrate the data, preserving NTFS permissions, and then export the sharing permissions from the registry and import them on the new server. This does not appear possible in this instance, as the 2003 cluster stores the sharing permissions quite differently. My question is, how would one perform this type of migration? Is it even possible? My current lead is the File Server Migration Toolkit, but I can find no information on the net about migrating from cluster to standalone, only the opposite. Please help.

    UPDATE: We ended up getting the data copied over (permissions intact) but had to recreate the shares manually by hand. It was a bit of a pain, but in the end it worked out.
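    For the copy itself, robocopy can carry the NTFS ACLs across; the shares then have to be recreated on the target, since only a standalone server keeps its share definitions under the LanmanServer registry key. A sketch; the paths and share names are placeholders:

        :: /E = include subdirectories, /COPYALL = data + ACLs + owner + auditing info
        robocopy \\oldcluster\files D:\Shares\files /E /COPYALL /R:1 /W:1

        :: recreate each share by hand on the new server, e.g.:
        net share files=D:\Shares\files /grant:everyone,full

    Share-level permissions can usually stay loose (as in the /grant above) when the NTFS ACLs already carry the real restrictions; tighten them if your policy says otherwise.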

  • How to restrict zone transfers to specific authorized servers only

    - by JonoB
    I recently failed a PCI compliance scan because of the following: "This DNS server allows unrestricted zone transfers. Attackers may be able to use this information to gain knowledge on the structure of your networks to aid in device discovery prior to an actual attack." The suggested solution is as follows: "Reconfigure this DNS server to restrict zone transfers to specific authorized servers only."

    I am running a dedicated CentOS Linux server. My understanding is that I have to edit the /etc/named.conf file, which I have done; the relevant part is as follows:

        options {
            acl "trusted" {
                127.0.0.1;
                xxx.xxx.xxx.001; // this is one of the server's IPs
                xxx.xxx.xxx.002; // this is another server's IP
            };
            allow-recursion { trusted; };
            allow-notify { trusted; };
            allow-transfer { trusted; };
        };

    I then restarted the named service (/etc/rc.d/init.d/named restart) and requested a re-scan, which failed again for the same reason. Am I missing something obvious here?
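    One thing stands out in the snippet above: in BIND, acl statements must be declared at the top level of named.conf, not nested inside the options block, and named will reject a file that nests them. A "restart" that fails this way can leave the old, unrestricted configuration in effect (or the daemon down) even though the edit looks correct. A corrected sketch, with the acl contents unchanged from the question:

        acl "trusted" {
            127.0.0.1;
            xxx.xxx.xxx.001;
            xxx.xxx.xxx.002;
        };

        options {
            allow-recursion { trusted; };
            allow-notify { trusted; };
            allow-transfer { trusted; };
        };

    Running named-checkconf /etc/named.conf before restarting will confirm whether the file parses at all.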

  • I get a Segmentation fault when doing apt-get util-linux

    - by Adam
    I've found that a lot of upgrade commands and Apache on my system are failing with Segmentation faults. I don't know if this is the main one, but a lot of packages depend on util-linux:

        root@myUbuntuHardyHeronServer:~# apt-get install util-linux
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be upgraded:
          util-linux
        1 upgraded, 0 newly installed, 0 to remove and 72 not upgraded.
        20 not fully installed or removed.
        Need to get 0B/441kB of archives.
        After this operation, 0B of additional disk space will be used.
        (Reading database ... 20547 files and directories currently installed.)
        Preparing to replace util-linux 2.13.1-5ubuntu2 (using .../util-linux_2.13.1-5ubuntu3.1_i386.deb) ...
        Unpacking replacement util-linux ...
        Segmentation fault
        dpkg: warning - old post-removal script returned error exit status 139
        dpkg - trying script from the new package instead ...
        Segmentation fault
        dpkg: error processing /var/cache/apt/archives/util-linux_2.13.1-5ubuntu3.1_i386.deb (--unpack):
         subprocess new post-removal script returned error exit status 139
        Segmentation fault
        dpkg: error while cleaning up:
         subprocess post-removal script returned error exit status 139
        Errors were encountered while processing:
         /var/cache/apt/archives/util-linux_2.13.1-5ubuntu3.1_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)
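    Segfaults across many unrelated binaries usually point at bad RAM, a failing disk, or corrupted shared libraries rather than at any one package. Two hedged starting points; debsums is not installed by default, and on a box in this state even its installation may fail:

        # verify installed files against the packages' recorded checksums
        apt-get install debsums
        debsums -s            # prints only files that fail verification

        # then reinstall anything it reports, e.g.:
        apt-get install --reinstall coreutils

    A pass of memtest86+ from the boot menu would rule the RAM in or out before trusting any reinstalled files.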

  • Linux NIC Bonding Issue (CentOS 4 / RHEL 3)

    - by jinanwow
    I am having an issue with bonding NICs on CentOS 4. It appears the bonding driver does work, but it is stuck in round-robin mode and I am trying to get to active-backup. The current config is:

        # ifcfg-bond0
        DEVICE=bond0
        IPADDR=192.168.204.18
        NETMASK=255.255.255.0
        ONBOOT=yes
        BOOTPROTO=none
        USERCTL=no
        TYPE=Bonding
        BONDING_OPTS="mode=1 miimon=100"

        # ifcfg-eth1
        DEVICE=eth1
        BOOTPROTO=none
        ONBOOT=yes
        TYPE=Ethernet
        MASTER=bond0
        SLAVE=yes

        # ifcfg-eth3
        DEVICE=eth3
        ONBOOT=yes
        BOOTPROTO=none
        TYPE=Ethernet
        MASTER=bond0
        SLAVE=yes

        $ cat /proc/net/bonding/bond0
        Ethernet Channel Bonding Driver: v2.6.3-rh (June 8, 2005)
        Bonding Mode: load balancing (round-robin)
        MII Status: up
        MII Polling Interval (ms): 0
        Up Delay (ms): 0
        Down Delay (ms): 0

        Slave Interface: eth1
        MII Status: up
        Link Failure Count: 0
        Permanent HW addr: 00:17:a4:8f:94:b1

        Slave Interface: eth3
        MII Status: up
        Link Failure Count: 0
        Permanent HW addr: 00:1b:21:56:b8:69

        $ cat /etc/modprobe.conf
        alias eth0 tg3
        alias eth1 tg3
        alias eth3 e1000
        alias eth2 e1000
        alias bond0 bonding
        options bond0 mode=1 miimon=100

    I have tried moving the bonding information out of ifcfg-bond0 into the modprobe configuration file. It seems that it is stuck in round-robin, and I am trying to get it into the active-backup (mode 1) state. Any ideas what would be causing this issue?
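    Two details in the output above suggest the module options were never applied: the mode is still the default (round-robin) and the MII polling interval is 0 despite miimon=100. Module parameters are only read when the bonding module loads, so if it was already loaded with defaults, editing modprobe.conf changes nothing until the module is reloaded. A sketch of a reload; this assumes no other bond devices depend on the module and will briefly drop the network:

        service network stop
        rmmod bonding
        modprobe bond0                 # the alias pulls in bonding with mode=1 miimon=100
        service network start
        cat /proc/net/bonding/bond0    # should now report fault-tolerance (active-backup)

    Note also that BONDING_OPTS in ifcfg-bond0 is honored by the newer RHEL 5 initscripts but, as far as I know, not by CentOS 4's, so modprobe.conf is the right place for the options on this release.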

  • How to give wife emergency access to logins, passwords, etc.?

    - by Torben Gundtofte-Bruun
    I'm the digital guru in my household. My wife is good with email and forum websites, but she trusts me with all our important digital stuff -- such as online banking and other things that require passwords, but also family photos and the plethora of other digital things in a modern home. We discuss relevant actions, but it's always me who executes them. If I should get "hit by a bus", then my wife would be thoroughly stranded -- she would have no idea what digital stuff is where on our computer, how to access it, what online accounts we have, or what their login credentials are. It would also leave my many public presences (personal websites, email accounts, social networks, etc.) unresolved.

    To complicate things, I'm one of those people who don't use "password" as my password everywhere; I use a mix of SuperGenPass and LastPass, and also two-factor authentication whenever possible. I don't have much hope that she would find her way through a written explanation of all that in a stressful situation. I could just tell her to ask my tech-savvy twin brother and then entrust him with my LastPass master passphrase. I feel that would have a high chance of success, but it's inelegant and leaves my wife without control of the information. How can I ensure that my wife has access to my digital remains?

  • Dell Inspiron 1564 overheating but fan not switching on, how to diagnose?

    - by Smugrik
    I've got a Dell Inspiron 1564 laptop that is about one and a half years old. For about a week now, the laptop has been overheating, causing it to switch off unexpectedly. The CPU fan works erratically: it will spin up for a while, doing its job and cooling down the CPU, but then it stops; the temperature goes back up and the fan doesn't react. Once the temperature reaches a critical point (over 85 Celsius, checked with SpeedFan), the laptop switches off.

    I already cleaned the vents and fan of dust, to no avail; they were actually quite clean anyway. Drivers and BIOS are up to date, and no crapware was ever installed on this machine. I don't know how to diagnose the problem. Could the temperature sensors be sending wrong information, so that the fan doesn't react? But then I'd expect the computer not to detect the overheating and shut down, either. Is there a way I can pinpoint the problem? Maybe some low-level diagnostic tool to check the functionality of the sensors and fans? The warranty is already over, so any suggestion would be welcome. Thanks!

  • Apache no longer starts at Windows boot up

    - by w3d
    I have Apache installed as part of XAMPP, a local test server. It is configured as a Windows (XP) Service with startup type "Automatic". For a long time it always started when Windows booted, but recently this has stopped happening. I now need to start it manually via the XAMPP Control Panel, at which point it appears to start up perfectly OK. The only recent updates to the machine (that I recall) are Windows Updates, none of which appear to have "known issues" relating to this, and updates to Google Chrome. Any ideas what could prevent Apache from starting automatically at Windows (XP) boot up?

    EDIT #1: There are 2 related errors in my system event log regarding the Service Control Manager:

        Timeout (30000 milliseconds) waiting for the Apache2.2 service to connect.

        The Apache2.2 service failed to start due to the following error:
        The service did not respond to the start or control request in a timely fashion.

    When I manually start the Apache server after boot up, there are 2 "information" events stating that it was "sent a start control" and that it "entered the running state". I notice it takes 19 seconds between the start control being sent and entering a running state, according to the event log. So maybe 30 seconds during boot up isn't long enough (anymore) for Apache to start?
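    If the slow startup itself can't be fixed, the Service Control Manager's 30-second timeout can be raised. A commonly suggested tweak; the value is in milliseconds (0xEA60 = 60000, i.e. 60 seconds) and a reboot is needed for it to take effect:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control]
        "ServicesPipeTimeout"=dword:0000ea60

    The cleaner alternative, "Automatic (Delayed Start)", only exists on Vista/Server 2008 and later, so it isn't an option on XP.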

  • cisco 2900xl - SNMP - Get mac address of device connected to an interface

    - by ankit
    Hello all. Basically, what I want to do is find out the MAC address of a device plugged in to an interface on the switch (FastEthernet0/1, for example). Reading through the switch documentation, I found out that I can configure an SNMP trap to make it notify me of any new MAC address the switch detects, using the command:

        snmp-server enable traps mac-notification

    but for some reason my switch does not support this feature. The only options I see are:

        CORE_SWITCH(config)#snmp-server enable traps ?
          c2900            Enable SNMP c2900 traps
          cluster          Enable Cluster traps
          config           Enable SNMP config traps
          entity           Enable SNMP entity traps
          hsrp             Enable SNMP HSRP traps
          snmp             Enable SNMP traps
          vlan-membership  Enable VLAN Membership traps
          vtp              Enable SNMP VTP traps
          <cr>

    So the other way would be for me to run a cronjob on my gateway to poll the switch periodically using SNMP to get new MAC addresses. I have looked everywhere but can't seem to find the OID that would provide me this information. Any help I can get would be very much appreciated! Here's the output from "show version" on my switch:

        Cisco Internetwork Operating System Software
        IOS (tm) C2900XL Software (C2900XL-C3H2S-M), Version 12.0(5.4)WC(1), MAINTENANCE INTERIM SOFTWARE
        Copyright (c) 1986-2001 by cisco Systems, Inc.
        Compiled Tue 10-Jul-01 11:52 by devgoyal
        Image text-base: 0x00003000, data-base: 0x00333CD8

        ROM: Bootstrap program is C2900XL boot loader

        CORE_SWITCH uptime is 1 hour, 24 minutes
        System returned to ROM by power-on
        System image file is "flash:c2900XL-c3h2s-mz.120-5.4.WC.1.bin"

        cisco WS-C2912-XL (PowerPC403GA) processor (revision 0x11) with 8192K/1024K bytes of memory.
        Processor board ID FAB0409X1WS, with hardware revision 0x01
        Last reset from power-on

        Processor is running Enterprise Edition Software
        Cluster command switch capable
        Cluster member switch capable
        12 FastEthernet/IEEE 802.3 interface(s)
        32K bytes of flash-simulated non-volatile configuration memory.
        Base ethernet MAC Address: 00:01:42:D0:67:00
        Motherboard assembly number: 73-3397-08
        Power supply part number: 34-0834-01
        Motherboard serial number: FAB040843G4
        Power supply serial number: DAB05030HR8
        Model revision number: A0
        Motherboard revision number: C0
        Model number: WS-C2912-XL-EN
        System serial number: FAB0409X1WS
        Configuration register is 0xF

    Thanks, -ankit
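    The MAC address table is exposed through the standard BRIDGE-MIB (dot1dTpFdbTable, OID 1.3.6.1.2.1.17.4.3). On Cisco switches the table is kept per VLAN and is reached through "community string indexing", i.e. appending @<vlan-id> to the community. A polling sketch; the community, VLAN ID and switch IP are placeholders, and whether this 12.0(5) image exposes the full BRIDGE-MIB is an assumption a quick walk will confirm:

        # dot1dTpFdbAddress: MAC addresses learned on VLAN 1
        snmpwalk -v1 -c public@1 192.0.2.10 1.3.6.1.2.1.17.4.3.1.1

        # dot1dTpFdbPort: the bridge port each MAC was learned on
        snmpwalk -v1 -c public@1 192.0.2.10 1.3.6.1.2.1.17.4.3.1.2

        # dot1dBasePortIfIndex (1.3.6.1.2.1.17.1.4.1.2) maps bridge ports to
        # ifIndex values, which ifDescr turns into names like FastEthernet0/1

    Diffing successive walks from cron gives the "new MAC detected" behavior the unsupported trap would have provided.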

  • periodically unable to play media

    - by avorum
    So I don't know if this is the right place to ask this at all, but I've gotten good help here before, so I thought I'd ask. For the last year or so, periodically my computer starts refusing to play media. In-browser players say they are playing, but they aren't: no audio, and the video doesn't move forward, although the first frame is displayed. iTunes acts similarly, thinking it is playing without actually playing any music. This persists across browsers, various application categories, etcetera. It can often be fixed by rebooting, but that is only a short-term solution. Does anyone know of anything that might cause this erratic behavior? I'm using Windows 7 64-bit. If additional information would help, please ask. Alternatively, if this isn't the right site for this, I would greatly appreciate some direction to a site better suited to my question. Thanks in advance for any help.

  • IPTables: NAT multiple IPs to one public IP

    - by Kaemmelot
    I'm looking for a way to NAT 2 or more inner IPs (in my case, Xen doms) to one outer IP. I tried to use:

        iptables -t nat -A PREROUTING -d 123.123.123.123 -j DNAT --to 1.2.3.4 --to 1.2.3.7
        iptables -t nat -A POSTROUTING -s 1.2.3.4 -j SNAT --to 123.123.123.123
        iptables -t nat -A POSTROUTING -s 1.2.3.7 -j SNAT --to 123.123.123.123

    and got an error:

        iptables v1.4.14: DNAT: Multiple --to-destination not supported
        Try `iptables -h' or 'iptables --help' for more information.

    I found this in the manpage: "Later Kernels (>= 2.6.11-rc1) don't have the ability to NAT to multiple ranges anymore." So my question is: why is it not possible anymore, and is there a workaround? Maybe I should use another method I don't know yet?

    EDIT: The idea is to use the system like a router, so I have one address but multiple users behind it. The problem is I don't know which connection refers to which user (for example 1.2.3.4). But I know they all have different ports open for incoming traffic. So my solution (for DNAT) would be to NAT all incoming connections to all users and filter all unused ports, so each connection goes to one single user. For outgoing traffic I would use:

        iptables -A FORWARD -i eth0 -d 1.2.3.4 -m state --state ESTABLISHED,RELATED -j ACCEPT
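    Since each user is distinguished by port, one DNAT rule per port expresses the same routing and is supported; the multi-destination form was reportedly dropped because it amounted to blind, connection-unaware fan-out. A sketch reusing the addresses from the question (the port numbers are placeholders; the SNAT rules can stay as they are):

        # incoming: steer each service port to the inner host that owns it
        iptables -t nat -A PREROUTING -d 123.123.123.123 -p tcp --dport 2201 -j DNAT --to-destination 1.2.3.4
        iptables -t nat -A PREROUTING -d 123.123.123.123 -p tcp --dport 2202 -j DNAT --to-destination 1.2.3.7

        # anything without an explicit rule never reaches an inner host,
        # which also covers the "filter all unused ports" requirement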

  • Programmatically add/delete users in Exchange

    - by Terry Gamble
    I've got the following set up: an ASP.NET site that allows my internal employees to add in new-hire information (no secure data, just stuff like name/address/phone); when they submit this, it goes into a SQL database. Every few minutes, a service runs that checks the database, and if there are new entries it adds them into Exchange. The issue is that I'm not happy with the way the service does this (it's not putting the address, etc., in), and as I don't have its source code, I'm thinking of recreating it.

    My issue, though, is finding even a starting point. I know I'll have to create the scripts through code, where data like this is retrieved from SQL:

        Joe Smith
        123 Main Street
        Nowhere, USA 19999

    and put into a PowerShell cmdlet (I'm not sure of the exact syntax, but I can figure that out unless someone already has it) where the user is created in Active Directory as a normal user and the mailbox is created simultaneously. From there, I just need to fill out fields in Active Directory with the person's address, etc. Finally, I need a deletion routine for when we terminate someone; however, I'm sure that will simply be a cmdlet that can easily be shelled out to, much like the initial one, once I can figure out how to start. Anyone have some good reference points, or have already done it and can share?
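    For Exchange 2007, the relevant cmdlets are New-Mailbox (which creates the AD user and the mailbox in one step), Set-User (address fields on the AD object) and Remove-Mailbox. A rough sketch to run from the Exchange Management Shell; the OU, database name, and all values are placeholders:

        $pw = ConvertTo-SecureString "TempP@ss1" -AsPlainText -Force
        New-Mailbox -Name "Joe Smith" -Alias jsmith `
            -UserPrincipalName jsmith@example.com `
            -OrganizationalUnit "example.com/Employees" `
            -Database "Mailbox Database" -Password $pw

        # the address details live on the AD user object
        Set-User -Identity jsmith -StreetAddress "123 Main Street" `
            -City "Nowhere" -PostalCode "19999" -Phone "555-0100"

        # termination: removes the mailbox and the AD account together
        Remove-Mailbox -Identity jsmith -Confirm:$false

    The polling service would read a row from SQL, build these calls, and mark the row processed; error handling around duplicate aliases is the part that tends to need the most care.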

  • SharePoint Records Center Submitted E-mail Records not picked up

    - by Kenneth Verburg
    We have set up a new SharePoint 2007 site with a Records Repository. We're using Exchange 2007 Managed Folders to route e-mails to this repository based on the "label" attached to the e-mail, as set in the Exchange 2007 journaling options. E-mails added to a Managed Folder get sent to SharePoint and end up in the "Submitted E-mail Records" list of the Records Repository. That's according to plan, but the e-mails are not routed to the respective document library as defined by the label. Instead, an error appears in the event viewer for every e-mail listed in the Submitted E-mail Records list, on every interval of the records repository schedule (set to every two minutes for testing purposes): "Value cannot be null, parameter name: g".

    Sending a document from the SharePoint site itself to the Records Repository via the Send To... link works fine, but e-mails get stuck in the list. We have set up document libraries in the Repository both with and without content types (with names matching the label and the record routing rule). Any ideas what could be wrong? This is what appears in the Application log every two minutes:

        Source:      Office SharePoint Server
        Category:    Records Center
        Type:        Error
        Event ID:    4975
        User:        N/A
        Computer:    SPS2007
        Description: Value cannot be null.
        Parameter name: g
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

  • IIS running but not serving content

    - by Kyle
    I have an internal dev server running Windows 2k8 R2 with the Web and FTP Server roles set up, which won't serve any content at all. Trying to connect from another host via telnet yields "connect failed":

        c:\>telnet devserver 80
        Connecting To devserver...Could not open connection to the host, on port 80: Connect failed

    Running netstat -an | find "80" on the dev server returns no connections on port 80 (a few on 1801, etc.); TCPView confirms this, listing no open connections on port 80. The following services related to the Web role are running:

        World Wide Web Publishing Service
        Application Host Helper Service
        Microsoft FTP Service (FTP connections to port 21 are granted)
        Windows Process Activation Service

    The default website bindings are:

        Type             Host Name   Port   IP Address   Binding Information
        http                         80     *
        net.tcp                                          808:*
        net.pipe                                         *
        net.msmq                                         localhost
        msmq.formatname                                  localhost

    When setting up a new application under the default site, the test function passes both connection and authorisation only if the "connect as" user is a local admin; otherwise the test errors with "invalid application path". At no point is the W3SVC service PID bound to port 80 (it is running and bound to 21 for FTP). There is no W3SVC log directory at c:\inetpub\logs\LogFiles\ (only FTPSVC2), and no HTTPERR directory at c:\windows\system32\ or c:\windows\system32\logfiles\. There do not appear to be any related errors in the event logs. I'd really appreciate any thoughts on a good place to dig into what's (not) going on here!
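    Since nothing at all is listening on 80, HTTP.SYS itself is worth checking: in IIS 7 the kernel HTTP listener opens the socket, not the W3SVC process. A couple of hedged checks from an elevated prompt (the stale IP below is a placeholder):

        rem if this prints a non-empty list that doesn't include this host's
        rem address, HTTP.SYS will never listen for the site's bindings
        netsh http show iplisten

        rem removing stale entries restores the default listen-on-all behavior
        netsh http delete iplisten ipaddress=192.0.2.99

        rem also worth a look: URL reservations that might shadow port 80
        netsh http show urlacl

    The missing HTTPERR directory fits this theory, since HTTP.SYS only creates it once it actually handles traffic.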

  • How to edit a read-only document in LibreOffice?

    - by TestUser16418
    I need to fill in a form (which I received in .doc format and saved as .odt). The file is read-only except for the fields where I can enter information. Unfortunately, with the fields filled it doesn't fit on one page, and I need to edit it so I can print and submit it. With LibreOffice beta 3, I could edit anything outside of the fields, and the fonts were slightly smaller, so it fit on the page even with the fields filled. Today I upgraded LibreOffice, and when I opened the file to fix a mistake in a field, it no longer fits on the page and I can't edit it. The properties dialog says that the document is NOT read-only, but it is: when I try to delete text, it tells me that I can't edit the read-only content. Can anyone give me some advice? I've been trying to print my form for 2 hours already. I tried AbiWord and KWord, but both are missing elements from the page (though the forms fit). I can also edit the margins (Format - Page is dimmed, but when I begin to edit a field it's no longer dimmed).

  • PHP scripts randomly becoming really slow to respond - Database lockup?

    - by webnoob
    Hi all. I wasn't sure whether to post this here or on Stack Overflow, so apologies if it's in the wrong place. I have about 7 PHP scripts running on a CentOS VPS. Each of these scripts contacts a game server and processes its logs; based on the logs, it either does some database queries or sends info back to the game server. I am having an issue where some of the scripts will randomly become REALLY slow to respond, and I don't know where to start with my debugging. Each script connects to its own database schema, but on the same MySQL server. Each script does about 4 inserts per second and twice as many select statements on its respective database. I thought a database lockup might cause the issue, but console messages that are read from the database are sent to the game server's console without issue every 30 seconds, even when the script is slow to respond to other commands. None of the scripts use much memory or CPU power, about 0.1% each. I know this information is really vague, but I don't know Linux very well at all (in fact, top is about my limit) and I really don't know where to start debugging this. Thanks.
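    Even with vague symptoms, MySQL can be made to say whether it's the bottleneck. Two hedged starting points; the log path is a placeholder and the variable names are the MySQL 5.1+ spellings:

        -- from the mysql console, while a script is being slow:
        SHOW FULL PROCESSLIST;         -- long-running or "Locked" queries
        SHOW ENGINE INNODB STATUS\G    -- lock waits, if the tables are InnoDB

        # in my.cnf (then restart mysqld): log every query over 1 second
        slow_query_log      = 1
        slow_query_log_file = /var/log/mysql-slow.log
        long_query_time     = 1

    If the process list stays clean while a script hangs, the next suspect would be the network calls to the game server blocking the PHP side rather than the database.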

  • Debian Squeeze vzquota

    - by benjamin
    Hello. Apparently, I got Debian Squeeze (Debian 6) to work on a VPS using debootstrap and chroot as described here. Subsequent installation of the harden, exim4 and mysql-server packages failed partially. Relevant information:

        insserv: warning: script 'S10vzquota' missing LSB tags and overrides
        insserv: warning: script is corrupt or invalid: /etc/init.d/../rc6.d/S00vzreboot
        insserv: warning: script 'vzquota' missing LSB tags and overrides
        insserv: There is a loop between service vzquota and stop-bootlogd if started
        insserv:  loop involving service stop-bootlogd at depth 2
        insserv:  loop involving service vzquota at depth 1
        insserv:  loop involving service rsyslog at depth 1
        insserv: Starting vzquota depends on stop-bootlogd and therefore on system facility `$all' which can not be true!
        insserv: Starting vzquota depends on stop-bootlogd and therefore on system facility `$all' which can not be true!
        insserv: There is a loop between service vzquota and stop-bootlogd if started
        insserv: Starting vzquota depends on stop-bootlogd and therefore on system facility `$all' which can not be true!
        insserv: Starting vzquota depends on stop-bootlogd and therefore on system facility `$all' which can not be true!
        insserv: exiting now without changing boot order!
        update-rc.d: error: insserv rejected the script header
        dpkg: error processing exim4-base (--configure): subprocess installed post-installation script returned error exit status 1

    Any suggestions? Keywords: vzquota, debian squeeze, installation, vps, virtual private server.
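    Squeeze's dependency-based boot ordering (insserv) refuses any init script without an LSB header, and the OpenVZ-provided vzquota script predates that requirement. One workaround is to give /etc/init.d/vzquota a minimal header so insserv stops rejecting it; the runlevels and dependencies below are assumptions, not taken from the real script:

        ### BEGIN INIT INFO
        # Provides:          vzquota
        # Required-Start:    $local_fs
        # Required-Stop:     $local_fs
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: OpenVZ per-container disk quota bookkeeping
        ### END INIT INFO

    After that, apt-get -f install should let the pending packages finish configuring. The other school of thought: vzquota/vzreboot belong to the host, not inside a container, so removing those scripts from the guest's /etc/init.d and rc*.d directories is also defensible.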

  • Redis 2.0.3 would not let go of deleted appendonly.aof file after BGREWRITEAOF

    - by Alexander Gladysh
    Ubuntu 10.04.2, Redis 2.0.3 (more details at the end of the question). My AOF file for Redis is getting too large, to the point where it soon threatens to take all the free disk space on my small-HDD VPS box:

        $ df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/xvda              32G   24G  6.7G  78% /

        $ ls -la
        total 3866688
        drwxr-xr-x  2 redis redis       4096 2011-03-02 00:11 .
        drwxr-xr-x 29 root  root        4096 2011-01-24 15:58 ..
        -rw-r-----  1 redis redis 3923246988 2011-03-02 00:14 appendonly.aof
        -rw-rw----  1 redis redis   32356467 2011-03-02 00:11 dump.rdb

    When I run BGREWRITEAOF, the AOF file shrinks, but disk space is not freed:

        $ ls -la
        total 95440
        drwxr-xr-x  2 redis redis     4096 2011-03-02 00:17 .
        drwxr-xr-x 29 root  root      4096 2011-01-24 15:58 ..
        -rw-rw----  1 redis redis 65137639 2011-03-02 00:17 appendonly.aof
        -rw-rw----  1 redis redis 32476167 2011-03-02 00:17 dump.rdb

        $ df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/xvda              32G   24G  6.7G  78% /

    Sure enough, Redis is still holding the deleted file:

        $ sudo lsof -p6916
        COMMAND   PID   USER FD  TYPE DEVICE   SIZE/OFF   NODE NAME
        ...
        redis-ser 6916 redis  7r REG  202,0 3923957317 918129 /var/lib/redis/appendonly.aof (deleted)
        ...
        redis-ser 6916 redis 10w REG  202,0   66952615 917507 /var/lib/redis/appendonly.aof
        ...

    How can I work around this issue? I can restart Redis this time, but I would really like to avoid doing this on a regular basis. Note that I cannot upgrade to 2.2 (an upgrade to 2.0.4 is feasible, though). More information on my system:

        $ lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 10.04.2 LTS
        Release:        10.04
        Codename:       lucid

        $ uname -a
        Linux my.box 2.6.32.16-linode28 #1 SMP Sun Jul 25 21:32:42 UTC 2010 i686 GNU/Linux

        $ redis-cli info
        redis_version:2.0.3
        redis_git_sha1:00000000
        redis_git_dirty:0
        arch_bits:32
        multiplexing_api:epoll
        process_id:6916
        uptime_in_seconds:632728
        uptime_in_days:7
        connected_clients:2
        connected_slaves:0
        blocked_clients:0
        used_memory:65714632
        used_memory_human:62.67M
        changes_since_last_save:8398
        bgsave_in_progress:0
        last_save_time:1299014574
        bgrewriteaof_in_progress:0
        total_connections_received:17
        total_commands_processed:55748609
        expired_keys:0
        hash_max_zipmap_entries:64
        hash_max_zipmap_value:512
        pubsub_channels:0
        pubsub_patterns:0
        vm_enabled:0
        role:master
        db0:keys=1,expires=0
        db1:keys=18,expires=0
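    A stopgap that avoids a restart: a file that is deleted but still open can be truncated through /proc, which returns its blocks to the filesystem immediately. Using the PID and descriptor from the lsof output above (fd 7, the stale read handle); this assumes nothing still needs the old AOF's contents, which should hold once the rewrite has produced a fresh file:

        # confirm which fd points at the deleted file
        ls -l /proc/6916/fd | grep deleted

        # truncate it in place (as root); redis keeps running
        : > /proc/6916/fd/7

    Whether 2.0.4 fixes the underlying leak (the old AOF handle never being closed after BGREWRITEAOF) is worth checking in its changelog before relying on this long-term.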

  • Intel DriverInstall issue on HP ProLiant server DL580G5

    - by TomTom
    Given: an HP ProLiant server to be used as a Hyper-V Server (R2) host.

    Problem: the server has an HP NC364T quad-port card, which is actually an Intel 1000 Pro adapter. We are stuck getting this adapter working properly. We need to get it working with 2 trunks of 2 ports each, each of them fully VLAN-"aware" (without filtering anything, so that Hyper-V can do the VLAN filtering). So far we did the following:

    - Installed the server.
    - Installed the latest PSP. Problem here: the tool does not work to set up teaming.
    - Tried to install the latest ProWin pack from Intel; the drivers there are a LOT more current (26th of March 2010, 9.13.41.0) than the installed Microsoft driver (March 2009, 9.13.4.10).
    - A driver update fails both on the command line (pnputil) and via the PortLock device manager executable. No information is provided either way.
    - The ANS toolset cannot create teams (it fails with an error; again, no reason provided). I know of no logs, and the event log has no information either.

    Questions: What are the best drivers to use? How do we actually install them? How can we then set up teaming, and how do we set up the VLAN behavior we want?
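    For the pnputil route, staging the INFs into the driver store sometimes succeeds where the vendor installer fails, and pnputil at least reports a per-INF status. Run from an elevated prompt; the extract path is an assumption about where the ProWin package unpacks (NDIS62 being the Server 2008 R2 driver set):

        rem stage and install every INF in the package
        pnputil -i -a C:\Temp\PROWinx64\PRO1000\Winx64\NDIS62\*.inf

    If the INFs install but ANS teaming still fails, that narrows the problem to the ANS layer rather than the base driver.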

  • How do you install .net4 on a Server 2008 r2 machine through psremoting in powershell?

    - by Jake
    I need to write a script that installs .NET 4 remotely, using PowerShell, on a group of Server 2008 R2 machines. I based my script on http://social.technet.microsoft.com/Forums/en-US/winserverpowershell/thread/3045eb24-7739-4695-ae94-5aa7052119fd/:

        enter-pssession -computername localhost
        $arglist = "/q /norestart /log C:\Users\tempuser\Desktop\dotnetfx4"
        $filepath = "C:\Users\tempuser\Desktop\dotNetFx40_Full_setup.exe"
        Start-Process -FilePath $filepath -ArgumentList $arglist -Wait -PassThru

    After running the command I would get the following log errors (running the same lines locally would install .NET without error):

        Action: Downloading Item
        Failed to CreateJob : hr= 0x80200014
        Action: Performing actions on all Items
        Action: Performing Action on Exe at C:\Users\tempuser\Desktop\dotnetfx4\SetupUtility.exe
        Exe (C:\Users\tempuser\Desktop\dotnetfx4\SetupUtility.exe) succeeded.
        Exe Log File: dd_SetupUtility.txt
        Action complete
        Action: ServiceControl - Stop clr_optimization_v2.0.50727_32
        ServiceControl operation succeeded!
        Action complete
        Action: ServiceControl - Stop clr_optimization_v2.0.50727_64
        ServiceControl operation succeeded!
        Action complete
        Action: Performing Action on Exe at C:\Users\tempuser\AppData\Local\Temp\Microsoft .NET Framework 4 Setup_4.0.30319\Windows6.1-KB958488-v6001-x64.msu
        Exe (C:\Users\tempuser\AppData\Local\Temp\Microsoft .NET Framework 4 Setup_4.0.30319\Windows6.1-KB958488-v6001-x64.msu) failed with 0x5 - Access is denied.
        PerformOperation on exe returned exit code 5 (translates to HRESULT = 0x5)
        Action complete
        OnFailureBehavior for this item is to Rollback.
        Action: Performing actions on all Items
        Action complete
        Action complete
        Action: Downloading http://go.microsoft.com/fwlink/?LinkId=164184&clcid=0x409 using WinHttp
        WinHttpDetectAutoProxyConfigUrl failed with error: 12180
        Unable to retrieve Proxy information although WinHttpGetIEProxyConfigForCurrentUser called succeeded
        Action complete
        C:\Users\tempuser\AppData\Local\Temp\Microsoft .NET Framework 4 Setup_4.0.30319\TMPF279.tmp.exe: Verifying signature for netfx_Core.mzz
        C:\Users\tempuser\AppData\Local\Temp\Microsoft .NET Framework 4 Setup_4.0.30319\TMPF279.tmp.exe Signature verified successfully for netfx_Core.mzz
        Action complete
        Decompression completed with code: 16389
        Decompression of payload failed: C:\Users\tempuser\AppData\Local\Temp\Microsoft .NET Framework 4 Setup_4.0.30319\netfx_Core.mzz
        Action complete
        Final Result: Installation failed with error code: (0x80074005) (Elapsed time: 0 00:00:28).

    Is there some security setting or perhaps something else I've missed?
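    The telling line is the "0x5 - Access is denied" on the embedded .msu: Windows Update standalone packages typically refuse to install under the network logon a remote PowerShell session runs as. A common workaround is to have each target machine run the installer under its own SYSTEM account via a one-shot scheduled task; the task name, start time, and paths below are placeholders:

        # run this inside the remote session on each machine
        $exe = "C:\Users\tempuser\Desktop\dotNetFx40_Full_setup.exe /q /norestart"
        schtasks /create /tn InstallNet4 /tr $exe /sc once /st 23:59 /ru SYSTEM /f
        schtasks /run /tn InstallNet4

        # poll for completion, then clean up
        schtasks /query /tn InstallNet4
        schtasks /delete /tn InstallNet4 /f

    The same trick (an out-of-session SYSTEM context) is what psexec -s provides, if that tool is permitted in the environment.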

  • ext4 filesystem corruption -- maybe hardware error?

    - by pts
    I'm getting these errors in dmesg about half an hour after I turn on the computer:

        [ 1355.677957] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1318420: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251700 offset=0(0), inode=1802725748, rec_len=179136, name_len=32
        [ 1355.677973] Aborting journal on device sda2-8.
        [ 1355.678101] EXT4-fs (sda2): Remounting filesystem read-only
        [ 1355.690144] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1318416: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251699 offset=0(0), inode=2194783952, rec_len=53280, name_len=152
        [ 1356.864720] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1312795: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251176 offset=1460(13748), inode=1432317541, rec_len=208208, name_len=119

    /dev/sda is an SSD, and it's using the noop scheduler. The /etc/fstab entry:

        UUID=acb4eefa-48ff-4ee1-bb5f-2dccce7d011f / ext4 errors=remount-ro,noatime,discard,user_xattr 0 1

    System information:

        $ cat /proc/mounts | grep /dev/sd
        /dev/sda1 /boot ext2 rw,noatime,errors=continue 0 0

        $ cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=10.04
        DISTRIB_CODENAME=lucid
        DISTRIB_DESCRIPTION="Ubuntu 10.04.3 LTS"

        $ uname -a
        Linux leetpad 2.6.35-30-generic-pae #61~lucid1-Ubuntu SMP Thu Oct 13 21:14:29 UTC 2011 i686 GNU/Linux

    I've run memtest for 7 hours, and it found no memory errors. Any obvious ideas about what can go wrong in this case? The most reasonable thing I can imagine is that the SSD is silently dropping some write requests, which eventually leads to an ext4 filesystem inconsistency (but no disk I/O errors). How can this happen? Is there a relevant configuration option I should ensure is set correctly? What tools should I use to diagnose the hardware failures? Would it be possible to diagnose the SSD failure without overwriting data?
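    Two non-destructive checks fit the "without overwriting data" requirement; device names below are taken from the question, and neither command writes to the filesystem:

        # SMART health, attributes and error log for the SSD
        smartctl -a /dev/sda
        smartctl -t long /dev/sda    # background self-test; read-only

        # from a live CD (or with the partition unmounted): check-only, no repairs
        e2fsck -fn /dev/sda2

    One more hedged thought: the fstab above mounts with discard, and SSD firmwares of this era had known TRIM-related bugs. Temporarily removing that option (and checking for a firmware update) would test the "silently dropped writes" theory without touching the data.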

  • package issue with ubuntu 10.10 and passenger requirements

    - by user368937
    I'm trying to get Passenger working with Ubuntu 10.10, and I'm running into a problem: the Passenger installer does not seem to recognize the virtual package. I'm getting this error:

        $ passenger-install-apache2-module
        ...
        * OpenSSL support for Ruby... not found
        ...

    and then it says to run this:

        * To install OpenSSL support for Ruby:
          Please run apt-get install libopenssl-ruby as root.

    When I run the above command, it refers to the libruby package:

        $ sudo apt-get install libopenssl-ruby
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Note, selecting 'libruby' instead of 'libopenssl-ruby'
        libruby is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 43 not upgraded.

    When I look at the details for libruby, it says it provides libopenssl-ruby:

        Provides: libbigdecimal-ruby, libcurses-ruby, libdbm-ruby, libdl-ruby, libdrb-ruby, liberb-ruby, libgdbm-ruby, libiconv-ruby, libopenssl-ruby, libpty-ruby, libracc-runtime-ruby, libreadline-ruby, librexml-ruby, libsdbm-ruby, libstrscan-ruby, libsyslog-ruby, libtest-unit-ruby, libwebrick-ruby, libxmlrpc-ruby, libyaml-ruby, libzlib-ruby

    But when I rerun the Passenger installer, it gives the same error. Let me know if you need more info. How do I fix this?
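    The installer doesn't consult dpkg; it checks whether the Ruby it runs under can actually load the openssl extension. That makes for a quick test, and points at the interpreter-specific package if it fails; the package names below are the usual Ubuntu 10.10 ones and are worth verifying with apt-cache:

        # does this Ruby have working OpenSSL support?
        ruby -ropenssl -e 'puts OpenSSL::OPENSSL_VERSION'

        # if not, install the 1.8-specific bits (the virtual libopenssl-ruby
        # package only maps onto whichever ruby is the default):
        sudo apt-get install ruby1.8 libruby1.8 ruby1.8-dev

        # and confirm which ruby the installer will actually see:
        which ruby && ruby -v

    A mismatch between the Ruby on the PATH (e.g. one built from source or via RVM) and the Ruby the packages provide produces exactly this "installed but not found" symptom.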

  • OS X Client & Ubuntu Server - Best way for client to access files on server?

    - by Camsoft
    I've got a local development web server running Ubuntu. I also have an iMac running OS X 10.6, which I use as a client and as my development machine. I currently have the Samba server installed on my Ubuntu server, with shares set up for all the website directories; I then use my Mac and Coda to edit the files via the shares. This generally works really well, but I noticed that my Mac was writing loads of resource-fork ._filename files everywhere. I found out the following about these files:

        These files are created on volumes that don't natively support full HFS file
        characteristics (e.g. ufs volumes, Windows fileshares, etc). When a Mac file is
        copied to such a volume, its data fork is stored under the file's regular name,
        and the additional HFS information (resource fork, type & creator codes, etc)
        is stored in a second file (in AppleDouble format), with a name that starts
        with "._". (These files are, of course, invisible as far as OS X is concerned,
        but not to other OS's; this can sometimes be annoying...)

    Does anyone know of a way of sharing files between a Mac client and a Linux server that is most compatible between the two operating systems? Ideally it would support the HFS metadata so that the resource-fork files are not created, and it also needs to handle the permissions properly between server and client.
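    If Samba stays, the server can at least hide the AppleDouble litter and refuse to store new files matching the pattern. A hedged smb.conf sketch; the share name and path are placeholders, and note that veto files makes a directory containing vetoed files undeletable unless delete veto files is also set:

        [websites]
            path = /var/www
            veto files = /._*/.DS_Store/
            delete veto files = yes

    The fork-preserving alternative would be an AFP share via netatalk, which stores the HFS metadata out of band instead of as ._ files next to the data.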

  • IIS 7 URL Rewrite to GeoServer running on Apache

    - by Maxim Zaslavsky
    I'm building a mapping application based on OpenLayers that uses GeoServer to serve up mapping data. The problem I'm having is that besides the map images I'm requesting through WMS, I'm using jQuery AJAX to get information from GeoServer. As GeoServer is running on a different port, my requests are being blocked due to cross-site scripting security policies in JavaScript. As a Java application, GeoServer runs on Apache on port 8080, while my IIS instance is running on port 80. Instead of building a proxy, I've decided to use URL Rewriting in IIS 7 to fix this problem. I'm following this guide, but it's still not working. Here are my URL Rewrite rule settings:

        Matches URL:  (.*)
        Condition:    {HTTP_URL} matching /geoserver
        Action:       rewrite to http://localhost:8080/{R:1}, appending query string

    When I request http://localhost/geoserver/wms?QUERY_LAYERS=SanDiego:FWSA_sandiego&LAYERS=SanDiego:FWSA_sandiego&SERVICE=WMS&VERSION=1.1.1&FEATURE_COUNT=20&REQUEST=GetFeatureInfo&EXCEPTIONS=application/vnd.ogc.se_xml&BBOX=-13009123.590156,3862057.2905992,-13006066.109025,3865114.7717302&INFO_FORMAT=text/html&x=20&y=20&width=40&height=40&srs=EPSG:900913, however, all I get is a 404, although the same request on port 8080 returns the proper result. What am I doing wrong? Thanks in advance.
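    Rewriting to a different host:port makes this a reverse-proxy rule, which URL Rewrite alone won't perform: the Application Request Routing (ARR) module has to be installed with its proxy mode enabled, or IIS falls through to a 404. A web.config sketch of the equivalent rule, under those assumptions:

        <system.webServer>
          <rewrite>
            <rules>
              <!-- requires ARR with "Enable proxy" checked -->
              <rule name="GeoServer proxy" stopProcessing="true">
                <match url="^geoserver/(.*)" />
                <action type="Rewrite" url="http://localhost:8080/geoserver/{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>

    Matching the path in the rule's own pattern (rather than in a {HTTP_URL} condition against an all-capturing (.*)) also keeps {R:1} holding just the part after /geoserver/, so the backend sees the URL it expects; the query string is appended by default.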
