Search Results

Search found 18244 results on 730 pages for 'controller action'.

  • IIS 7.0 - responses throttled to 500ms blocks?

    - by Julia Hayward
    Scenario: an ASP.NET MVC web app sitting on my local machine (Vista Ultimate, IIS 7.0), nothing going on except one user (me) logged in and viewing an index page. The page includes 9 dynamic images drawn from the underlying DB and returned from a controller action. I have got the actual processing time for these images down to 15ms each. Turn on Firebug and watch the page load. What I see is 9 requests for images firing off together – no surprise – but four come back to me almost immediately; two more after 0.5s; another after 1s; then more at 1.5s and 2s. Logging on the server side suggests the individual responses are still only taking 15ms. So it appears IIS is queueing things up into 500ms chunks. (Repeating the experiment produces different results, but each time the images return in similar blocks – you might get three in the first group, then three at 0.5s, two at 1s etc., for example – and it's always at 500ms intervals, never anything else.) It's also reproducible across browsers, and it doesn't happen with other forms of content. I haven't found any particular mention of this problem out there, so I'm assuming it's not an IIS bug. So is it: (i) something IIS on desktop OSs does deliberately, to make you use server OSs in production? (ii) some magical setting that has eluded me for as long as I've known IIS? (iii) something peculiar to MVC or SQL Server 2008? Or something else?
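
    For anyone reproducing this, it can help to take the browser out of the equation and time the image requests directly. A minimal sketch, assuming curl is available and /images/render/$i stands in for the real controller route:

        # Fire 9 concurrent requests; print total time per response.
        for i in 1 2 3 4 5 6 7 8 9; do
          curl -s -o /dev/null -w "image $i: %{time_total}s\n" \
            "http://localhost/images/render/$i" &
        done
        wait

    If the same 500ms banding shows up here, the browser's per-host connection limit is ruled out and the queueing really is server-side.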

  • Windows 7 - system error 5 problem

    - by ianhobson
    My wife has just had a new computer for Christmas (with an upgrade from Vista to Windows 7), and has joined the home network. We are using a mix of Windows XP and Ubuntu boxes linked via a switch. We are all in the same workgroup (no domain). Internet access, DHCP and DNS come from an SME server that thinks it is a domain controller (although we are not using a domain). I need to run a script to back up my wife's machine (venus). In the past the script creates a share on a machine with lots of space (leda), and then executes the line:

        PSEXEC \\venus -u admin -p adminpassword -c -f d:\Progs\snapshot.exe C: \\leda\Venus\C-drive.SNA

    With my wife's old XP machine, this would run the Sysinternals utility, copy snapshot.exe to her machine and run it, which would then back up her C: drive to the share on leda. I cannot get this to work with Windows 7, nor can I link through to the C$ share on her machine. This gives me a permissions error (system error 5). The admin account is a full admin account. And yes - I do know the password. The ordinary shares on her machine work fine! I guess I'm missing something that Microsoft have built into Windows 7 - but what? The machine is running Windows 7 Business, with Windows Firewall, AVG antivirus, and all the crap-ware you get with a new PC removed. Thanks
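
    As a narrower test, it may be worth checking whether the administrative share accepts those credentials at all, independently of PSEXEC (a sketch; the * makes it prompt for the password):

        net use \\venus\C$ * /user:venus\admin

    If this also returns system error 5, the problem sits with the administrative share or the account's remote rights rather than with PSEXEC itself.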

  • Windows authentication to SQL Server via IIS and PHP

    - by Jeff
    We're running a PHP 5.4 application on Server 2008 R2. We would like to connect to a SQL Server 2008 database, on a separate server, using Windows authentication (it must be Windows authentication--the DB admins won't let us connect any other way). I have downloaded the SQL Server drivers for PHP and installed them. IIS is configured for Windows authentication, and anonymous authentication has been disabled. $_SERVER['AUTH_USER'] reports our currently logged on Windows account. In php.ini, we have set fastcgi.impersonate = 1. We set up a connection using the following code from Microsoft:

        $serverName = "sqlserver\sqlserver";
        $connectionInfo = array( "Database"=>"some_db");
        /* Connect using Windows Authentication. */
        $conn = sqlsrv_connect( $serverName, $connectionInfo);
        if( $conn === false ) {
            echo "Unable to connect.</br>";
            die( print_r( sqlsrv_errors(), true));
        }

    and are presented with the following error message:

        Unable to connect. Array ( [0] => Array (
            [0] => 28000 [SQLSTATE] => 28000
            [1] => 18456 [code] => 18456
            [2] => [Microsoft][SQL Server Native Client 11.0][SQL Server]Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.
            [message] => [Microsoft][SQL Server Native Client 11.0][SQL Server]Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. ) )

    Is it possible to connect to SQL Server 2008 via PHP using Windows authentication? Are there any additional required settings we need to make on IIS, SQL Server, or any other component (like a domain controller)?
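
    One quick sanity check before digging into delegation settings: confirm which Windows identity the PHP worker actually runs requests under (a sketch, assuming exec() is permitted on the server):

        <?php
        // Prints the account the PHP process is impersonating for this request.
        echo exec('whoami');
        ?>

    If this prints the application pool identity rather than the logged-on user, impersonation isn't taking effect; if it prints the user, the ANONYMOUS LOGON failure points at the classic Kerberos double hop between the web server and the separate SQL Server box, which needs delegation configured in AD.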

  • Trouble getting latest version of Git

    - by TheMethod
    I am using Ubuntu 10.04 LTS. I'm looking at using git as source control for personal projects and GitHub as a remote repository. I was having trouble pushing a commit to my remote GitHub repo, getting the following error message:

        The requested URL returned error: 403 while accessing https://github.com/Jstall/helloworld.git/info/refs

    When I did some digging I found that the problem could be that I don't have the latest version of git. When I ran --version I found that I have version 1.7.0.4 locally. So I tried to update git using:

        sudo apt-get install git

    but I get the following error:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package git is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        E: Package git has no installation candidate

    I've tried running:

        sudo apt-get update

    and trying again, but it didn't seem to make a difference. I'm not sure if it's relevant, but I'm also getting a couple of 404s when I run update:

        Err http://wine.budgetdedicated.com edgy/main Packages
          404 Not Found
        Fetched 4,117B in 0s (5,142B/s)
        W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/edgy/universe/binary-i386/Packages.gz 404 Not Found [IP: 91.189.91.15 80]
        W: Failed to fetch http://wine.budgetdedicated.com/apt/dists/edgy/main/binary-i386/Packages.gz 404 Not Found

    I'm not sure what I should try next. Could anyone suggest a course of action to get this resolved? Any advice would be appreciated. Thanks much!
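
    Two details that may explain both symptoms (a sketch; check /etc/apt/sources.list.d/ as well before deleting anything): on 10.04 the package has historically been named git-core rather than git, and the edgy entries causing the 404s look like leftovers from a much older release.

        # Drop the dead edgy repository lines, refresh, then install.
        sudo sed -i '/edgy/d' /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get install git-core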

  • KVM error with device pass through

    - by javano
    I am running the following command, booting a Debian live CD and passing a host PCI device to the guest as a test, and KVM errors out:

        kvm -m 512 -boot c -net none -hda /media/AA502592502565F3/debian.iso -device pci-assign,host=07:00.0

        PCI region 1 at address 0xf7920000 has size 0x80, which is not a multiple of 4K.
        You might experience some performance hit due to that.
        No IOMMU found. Unable to assign device "(null)"
        kvm: -device pci-assign,host=07:00.0: Device 'pci-assign' could not be initialized

        lspci | grep 07
        07:00.0 Ethernet controller: 3Com Corporation 3c905C-TX/TX-M [Tornado] (rev 74)

    I shoved an old spare NIC into my motherboard to test PCI pass-through. I have searched the Internet with Google and found that errors relating to "No IOMMU found" often mean the PCI device is not supported by KVM. Does KVM have to support the device being passed through? I thought the point was to pass the device through and let the guest worry about it? Ultimately I want to pass through a PCI random number generator; is this not going to be possible with KVM? Thank you.
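
    For what it's worth, "No IOMMU found" is about the host platform rather than the NIC: device assignment needs an IOMMU (Intel VT-d or AMD-Vi) in the chipset, enabled both in the BIOS and in the kernel. A quick check, as a sketch:

        # Look for DMAR/IOMMU initialisation messages on the host.
        dmesg | grep -i -e dmar -e iommu

        # If nothing shows up: enable VT-d in the BIOS, then boot the host
        # kernel with intel_iommu=on (on Intel hardware) and check again.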

  • How to cache authentication in Linux using PAM/Kerberos authentication (for CVS)?

    - by Calonthar
    We have several Linux servers that authenticate Linux user passwords against our Windows Active Directory server using PAM and Kerberos 5. The Linux distro we use is CentOS 6. On one system we have several version control systems like CVS and Subversion, both of which authenticate users through PAM, so that users can use their normal Unix / Windows AD accounts. Since we started using Kerberos for password authentication, we have found that CVS on a client machine is often much slower in establishing a connection. CVS authenticates the user on every request (e.g. cvs diff, log, update...). Is it possible to cache the credentials that Kerberos uses, such that it does not need to ask the Windows AD server every time a user executes a cvs action? Our PAM config /etc/pam.d/system-auth looks like the following:

        auth        required      pam_env.so
        auth        sufficient    pam_unix.so nullok try_first_pass
        auth        requisite     pam_succeed_if.so uid >= 500 quiet
        auth        sufficient    pam_krb5.so use_first_pass
        auth        required      pam_deny.so

        account     required      pam_unix.so broken_shadow
        account     sufficient    pam_succeed_if.so uid < 500 quiet
        account     [default=bad success=ok user_unknown=ignore] pam_krb5.so
        account     required      pam_permit.so

        password    requisite     pam_cracklib.so try_first_pass retry=3
        password    sufficient    pam_unix.so md5 shadow nullok try_first_pass use_authtok
        password    sufficient    pam_krb5.so use_authtok
        password    required      pam_deny.so

        session     optional      pam_keyinit.so revoke
        session     required      pam_limits.so
        session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
        session     required      pam_unix.so
        session     optional      pam_krb5.so
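
    Before changing the stack, it may be worth measuring how much of the delay is the KDC round trip itself (a sketch; the principal below is a placeholder for a real AD account):

        time kinit someuser@EXAMPLE.COM

    If each ticket request is slow, every pam_krb5 authentication pays that cost once per CVS command; the usual workaround for CVS specifically is to stop authenticating every request with a password, e.g. running CVS over SSH (:ext:) with key-based auth.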

  • Sonicwall NSA 240, Configured for LAN and DMZ, X0 and X2 on same switch - ping issues

    - by Klaptrap
    Our Sonicwall vendor supplied and networked the NSA 240 when we required a DMZ in our infrastructure. This was configured and appeared correct, although VPN users periodically dropped DNS and Terminal Services. The vendor could not resolve this, so the call was escalated to Sonicwall. The Sonicwall support engineer took a look and concluded that the X0 (LAN) and X2 (DMZ) interfaces were cabled to the same switch, and that this is the issue. What he observed is that a ping request to the LAN domain controller, from a connected VPN user, is forwarded (X0) from the VPN client IP to the DC IP, but the ping response from the DC IP to the VPN client IP is on X2. A copy of the log is detailed below:

        02/02/2011 10:47:49.272  X1*(hc)  X0   192.168.1.245  192.168.1.8    IP  ICMP -- FORWARDED
        02/02/2011 10:47:49.272  --       X0*  192.168.1.245  192.168.1.8    IP  ICMP -- FORWARDED
        02/02/2011 10:47:49.272  X2*(i)   --   192.168.1.8    192.168.1.245  IP  ICMP -- Received

        X0 - LAN
        X1 - WAN
        X2 - DMZ

    The Sonicwall engineer concluded that we either need a separate switch for X2, or we use a VLAN switch for both. I am the company's software engineer and we have yet to hear back from the vendor, so I am lost at sea at the moment. Do we need to buy this additional equipment, or is there another configuration on the NSA 240 we can use?

  • Trying to run a codeigniter app on custom php

    - by hamstar
    I have a CodeIgniter app that I deployed to a server with PHP 5.2, and my dev box has 5.3, and some stuff doesn't work anymore. I didn't want to upgrade PHP and risk the other app on the server having issues. Anyway, I compiled a custom PHP and added the following to a single .conf file in /etc/httpd/conf.d/zcid.conf with all the other conf files:

        <VirtualHost *:80>
            DocumentRoot /var/www/cid/app
            ServerName sub.example.co.nz
        </VirtualHost>

        <Directory "/var/www/cid/app">
            AuthType Basic
            AuthName "oh dear how did this get here i am no good with computer"
            AuthUserFile /path/to/auth
            Require valid-user

            RewriteEngine on
            RewriteCond $1 !^(index\.php|robots\.txt|createEvent\.php|/cgi-bin)
            RewriteRule ^(.*)$ /index.php/$1 [L]

            AddHandler custom-php .php
            Action custom-php /cgi-bin/php53.cgi
        </Directory>

    In /var/www/cid/app I have the cgi-bin folder and the php53.cgi that I copied from /usr/local/php53/bin/php-cgi. But now when I navigate to the subdomain it says:

        The requested URL /cgi-bin/php53.cgi/index.php/ was not found on this server.

    And if I try to browse to /cgi-bin it says (what it is supposed to?):

        You don't have permission to access /cgi-bin/ on this server.

    Quite confused now. Anyone know what to do here? Thanks :)
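
    One piece that looks absent, assuming nothing else in the config maps /cgi-bin/: the Action handler points at a URL, so Apache needs a ScriptAlias (or equivalent) that both locates the wrapper on disk and marks it executable as CGI. A sketch:

        ScriptAlias /cgi-bin/ "/var/www/cid/app/cgi-bin/"
        <Directory "/var/www/cid/app/cgi-bin">
            Options +ExecCGI
            Order allow,deny
            Allow from all
        </Directory>

    The "was not found" error is consistent with /cgi-bin/php53.cgi never being resolved to a real file.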

  • Why won't IIS serve my website? - 404 Page Not Found

    - by Giffyguy
    Built a brand new server, with a fresh copy of Windows Server 2003 Enterprise x86 Edition, then:

    - Installed the .NET Framework 1.1, 2.0, 3.5, and 4.0
    - Added the "Domain Controller" and "Application Server" roles
    - Created a new website, pointed it to a local directory: C:\Inetpub\angryoctopus.net\
    - Added the appropriate headers: angryoctopus.net, www.angryoctopus.net, TCP port 80, all IPs
    - Moved the website content into the local directory
    - Configured the default document in IIS: Default.aspx
    - Enabled ASP.NET for this website, and set it to the correct version: 2.0.50727
    - Configured the zone angryoctopus.net in DNS; tested DNS lookup here to ensure DNS was functional
    - Opened the website in VS 2008 and re-built (and debugged) it to ensure the content was functional

    I can clearly see that IIS is responding normally, by browsing directly to my server's IP address. Since this does not use the angryoctopus HTTP header, the default website is displayed instead: the "Under Construction" page. And yet, after all of this, angryoctopus.net still returns 404. Does anybody know what could be wrong? What troubleshooting steps have I forgotten? Is there a command-line diagnostic that might provide more information?
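
    On the command-line diagnostic question, one simple sketch is to request the site by IP with and without the expected Host header and compare (192.0.2.10 is a placeholder for the server's real IP; runnable from any machine with curl):

        # Headers only; the Host header is what selects the site binding.
        curl -I -H "Host: angryoctopus.net" http://192.0.2.10/
        curl -I http://192.0.2.10/

    If the first request returns the 404 while the second returns the Under Construction page, the host-header binding is being matched and the 404 comes from inside the angryoctopus site (home directory, default document); if both return the default site, the binding never matches at all.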

  • Restricting Access to Application(s) on Point of Sale system

    - by BSchlinker
    I have a customer with two point of sale systems, a few workstations and a Windows 2003 SBS server. The point of sale systems typically run QuickBooks Point of Sale and are logged in with a user who has restricted permissions / access (via Group Policy). Occasionally, one of the managers needs to run a few additional applications, including some accounting software. I have created an additional user for this manager, allowing them to log in and access the accounting software. The problem is that switching users on the system is painful, as QuickBooks takes a few minutes to close (as POSUser) and then reopen (as ManagerUser). If customers are waiting, this slows things down drastically. Since the accounting software is stored on a network drive, it would be easiest if the manager could simply double-click something, authenticate against the network drive / domain controller, and the program would launch. When they close the program, the session to the network drive would be lost and the program would no longer be accessible. Is there any easy way to do this? Both users are on a domain and the system is Windows 7. I just don't want to require the user to switch back and forth. In the worst case, they forget to switch back and leave the accounting software wide open.
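
    One candidate, sketched with hypothetical names: keep the POS user logged in and launch only the accounting package under the manager's identity with runas, e.g. from a desktop shortcut:

        runas /user:DOMAIN\ManagerUser "\\server\apps\Accounting\accounting.exe"

    runas prompts for the manager's password on each launch, the program runs with that account's rights on the share, and nothing of the manager's session survives once it exits. Whether the accounting package tolerates being started from a UNC path this way would need a quick test.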

  • how to download vim script in command line

    - by HaiYuan Zhang
    Whenever I want to install a new vim script on the Linux server I'm working on, my typical workflow is the following:

    - surf to the plugin's homepage on vim online using Firefox
    - download the right version of the plugin to my laptop by clicking some highlighted link
    - upload the downloaded plugin from my laptop to the Linux server using WinSCP

    which is really inconvenient. I don't know what the magic behind this is: if I click the same hyperlink in a web browser it downloads the file, but feeding that hyperlink to wget on the Linux command line ends up with nothing but an error. Otherwise I could just grab the link in the web browser and then use wget or some similar tool to actually do the downloading. I try new cool vim scripts quite often, so you can imagine my dismay at having to repeat this tedious dance all the time. So if anyone knows some tips that would let me download vim scripts in a more "professional" way, I'd appreciate it a lot.

    Post edit: my problem is not finding a tool like wget or curl; the problem I've met is specific to using these tools to download vim scripts. Let's take http://www.vim.org/scripts/script.php?script_id=30 as an example; it's the normal place where one can get the script, at least for me, but I can't find a working URL on that page that I can feed to wget.
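
    For what it's worth, a sketch of how this tends to work on vim.org: the page itself is script.php, but the file link in the download table points at download_script.php with a src_id parameter (the number below is hypothetical; read it off the actual download link). Quoting the URL also matters, since the shell otherwise mangles the ? and =:

        wget -O plugin.vim 'http://www.vim.org/scripts/download_script.php?src_id=12345'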

  • Connection problems via Wifi (multiple devices)

    - by Kelvin Farrell
    I'm having connection issues with my router (Linksys WRT610N) at home. There are a number of things happening (maybe more; this is just what I've mainly noticed):

    1) Using my laptop (MacBook Pro, OS X Lion), I am unable to complete any operations on my external FTP server, hosted with FatCow. I can connect to it and navigate through all the files, but when I try to edit/delete/add a file the operation times out. EVERY time. I've used two other Wifi connections with this laptop and neither has this issue.

    2) I am unable to upload photos/videos to Facebook or Twitter using my phone (Samsung Galaxy S2) or my tablet (HP Touchpad - CM9). Neither am I able to upload files to Dropbox from either device. The same thing happens in all cases: the upload begins and then just hangs at 0% forever. After about 10 minutes I am always forced to disconnect the Wifi to stop it.

    3) My laptop is getting slow internet speeds, even though we are on 20mb broadband. Speed tests say I'm getting a good connection and my ping is good, but when using streaming services like Spotify it takes forever to load a page and frequently stops to buffer whilst playing a song.

    Don't know if it's worth mentioning, but I have no issues with my Xbox (Ethernet), AppleTV (Wifi) or my girlfriend's phone (Nokia Lumia 800 - WP7.5) on the network. I'd really appreciate any help. This is driving me insane and is really affecting both my working and leisure use of the internet.

  • Can I get all active directory passwords in clear text using reversible encryption?

    - by christian123
    EDIT: Can anybody actually answer the question? I don't need an audit trail; I WILL know all the passwords, users can't change them, and I will continue to operate this way. This is not for hacking!

    We recently migrated away from an old and rusty Linux/Samba domain to Active Directory. We had a custom little interface to manage accounts there. It always stored the passwords of all users and all service accounts in cleartext in a secure location (of course, many of you will certainly not think of this as being secure, but without real exploits nobody could read them), and password changing was disabled on the Samba domain controller. In addition, no user ever selects his own password; we create them using pwgen. We don't change them every 40 days or so, but only every two years, to reward employees for really learning them and NOT writing them down. We need the passwords to e.g. go into user accounts and modify settings that are too complicated for group policies, or to help users. These might certainly be controversial policies, but I want to continue them on AD. For now I save new accounts and their pwgen-generated passwords (pwgen creates nice-sounding random words with reasonable amounts of vowels, consonants and numbers) manually into the old text file that the old scripts used to maintain automatically.

    How can I get this functionality back in AD? I see that there is "reversible encryption" on AD accounts, presumably for challenge-response authentication systems that need the cleartext password stored on the server. Is there a script that displays all these passwords? That would be great. (Again: I trust my DC not to be compromised.) Or can I have a plugin to AD Users & Computers that gets a notification of every new password and stores it in a file? On clients that is possible with GINA DLLs; they can be notified about password changes and receive the cleartext.

  • linux keeps disconnecting from wireless network

    - by Matteo Ceccarello
    I'm running Arch Linux on an Acer laptop and my wireless connection doesn't stay up. After a while it disconnects, and when I try to reconnect I get stuck with a "Waiting for authorization" message. I have to retry several times before the connection stays up for a few minutes. This happens with both NetworkManager and wicd. The strange thing is that the iMac that sits next to the laptop connects fine, and when I use my laptop within the university wireless network it works normally. How can I solve this problem?

    EDIT: I've tried to connect manually following these steps:

        iwlist wlan0 scan
        wpa_supplicant -i wlan0 -c /etc/wpa_supplicant.conf
        dhcpcd wlan0

    and it works; I can ping google. However, looking at the wpa_supplicant output I see that it keeps connecting and disconnecting. I'm using WPA2, and this seems to be a problem with authentication.

    EDIT 2: as pointed out in the answers, I forgot to mention my hardware/software specifications:

        kernel: Linux 3.0-ARCH
        # lspci | grep -i net
        07:00.0 Network controller: Intel Corporation WiFi Link 5100
        # lsmod | grep -i 80211
        mac80211  216021  1 iwlagn

    I use a Netgear DGN1000 modem/router. My dmesg output is shown here: http://pastebin.com/8Tf7iage
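
    If the network block in wpa_supplicant.conf was written by hand, it may be worth regenerating the PSK from the passphrase and letting wpa_supplicant daemonize before dhcpcd runs (a sketch; SSID and passphrase are placeholders):

        wpa_passphrase "MyNetwork" "my passphrase" >> /etc/wpa_supplicant.conf
        wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant.conf
        dhcpcd wlan0

    A mis-derived PSK produces exactly this kind of connect/disconnect loop during the WPA2 handshake.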

  • picking a linux compatible motherboard

    - by Chris
    Last time I bought a new computer (I build them myself) I got a motherboard that had really poor Linux support for a long time. Specifically the audio: I had to wait months before the kernel supported the onboard audio chipset. That is exactly the situation I'm trying to avoid this time around.

    I have some specific questions about "server motherboards", actually. I looked at a few models of server motherboard from Intel, and some random models on Newegg. I wasn't able to see much difference from regular desktop motherboards, other than that most had two sockets and support for much more RAM. These boards seem more popular with Linux users. Why? AMD and Intel both have server CPUs as well; same question there: what's the difference? To make this question more concrete, I was looking at this motherboard. The main questions about it that I can't answer are:

    - Can I get a motherboard without onboard RAID and audio? I want to get a hardware RAID controller and a PCI audio card, and I thought a server motherboard would be cheaper and not have these "extras", since who wants an audio card on a server?
    - Where can I find out about Linux support for the components on this board: "Intel ICH10R", "Realtek ALC889", "Marvell 88E8056"?
    - I'm buying this computer to work as a Linux desktop for a lot of compiling, coding and audio/video work, but I don't want to rule out the possibility of installing Windows and playing some games at one point (even if the last game I got has been sitting in its box unopened for almost a year). Is it a good idea to buy a "server motherboard" and play games on it, or are desktop boards better value for this?

    The ultimate solution for me would be a motherboard that had GPL drivers for onboard LAN, a single CPU socket, lots of PCI Express and PCI, USB 3.0, and no fancy hard disk controllers, since I'll be getting a separate one.
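
    On the "where can I find out" question, one practical check is to boot a live CD on hardware that uses the same chips and see which kernel drivers actually bind (a sketch):

        # Lists each PCI device with the kernel driver in use and candidate modules.
        lspci -nn -k

    The PCI IDs in the -nn output can then be grepped against the kernel source, or searched directly, to confirm which driver (and since which kernel version) claims each chip.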

  • Setting cfengine3 class based on command output

    - by gnomie
    This question is very similar to "How can I use the output of a command in cfengine3", but the answer does not apply in my case, I believe. I want to update a git repository via "git pull" and, based on whether that led to changes, trigger some follow-up action. Simplified: if there were something like "match output and set class" via some body if_output_matches, I would want to use something like this:

        bundle agent updateRepo
        {
        commands:
          "/usr/bin/git pull"
            contain => setuidgiddir_sh("$(globals.user)","$(globals.group)","$(target)"),
            classes => if_output_matches("Already up-to-date.","no_update");

        reports:
          no_update::
            "nothing updated";
        }

        body contain setuidgiddir_sh(owner,group,folder)
        {
        exec_owner => "$(owner)";
        exec_group => "$(group)";
        useshell   => "true";
        chdir      => "$(folder)";
        }

    So, is it possible to use the output of a possibly expensive command and base some decision on that? The execresult function is not a good choice for me, as a) the pull may become expensive at times (not recommended, following the cfengine3 reference) and b) it does not allow me to specify user, group, or working dir, which is important in my case. The repository is in user space and not owned by root.
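
    One pattern that may work, sketched under the assumption that stdout can't be matched directly: push the "changed or not" answer into the command's exit code, then map exit codes to classes with the kept_returncodes / repaired_returncodes attributes of the commands promise and a classes body. Since useshell is already on, the command string itself can be a small pipeline:

        # Exits 0 when the repo was already current, 1 when the pull changed something.
        /usr/bin/git pull | grep -q 'Already up-to-date.'

    With kept_returncodes => { "0" } and repaired_returncodes => { "1" } on the promise, a classes body's promise_repaired list can then define the class the reports section keys on.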

  • Upgrading MySQL Connector/Net

    - by Todd Grover
    I am trying to publish a website with our hosting provider. I am getting errors due to the fact that they only allow medium trust, and the MySQL Connector/Net that I am using requires reflection to work. Unfortunately, reflection is not allowed in medium trust. After some research I found that the newest version of MySQL Connector/Net may solve this problem: Connector/Net 6.6 includes enhancements to partial trust support, to allow hosting services to deploy applications without installing the Connector/Net library in the GAC. I am thinking that will solve my problem. So, I uninstalled MySQL Connector/Net 6.4.4 and installed MySQL Connector/Net 6.6.4. When I run the application in Visual Studio 2010 I get the error:

        ProviderIncompatibleException was unhandled by user code

    The message is "An error occurred while getting provider information from the database. This can be caused by Entity Framework using an incorrect connection string. Check the inner exceptions for details and ensure that the connection string is correct." The InnerException is "The provider did not return a ProviderManifestToken string."

    Everything works fine when I have Connector/Net 6.4.4 installed; I can access the database and perform read/write/delete actions against it. I have references to the following in the project:

        MySql.Data
        MySql.Data.Entity
        MySql.Web

    My connection string in Web.config:

        <connectionStrings>
          <add name="AESSmartEntities"
               connectionString="server=ec2-xxx-xx-xxx-xx.compute-1.amazonaws.com; user=root; database=nunya; port=3306; password=xxxxxxx;"
               providerName="MySql.Data.MySqlClient" />
        </connectionStrings>

    What might I be doing wrong? Do I need any additional setting(s) to work with version 6.6.4 that weren't required with the older version 6.4.4?
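
    One thing that commonly breaks after swapping Connector/Net versions: the ADO.NET provider registration still points at the old assembly version. If it lives in web.config (or machine.config) under DbProviderFactories, it needs to match the installed MySql.Data.dll. A sketch, assuming 6.6.4 installs as assembly version 6.6.4.0 (check the actual version on the DLL):

        <system.data>
          <DbProviderFactories>
            <remove invariant="MySql.Data.MySqlClient" />
            <add name="MySQL Data Provider"
                 invariant="MySql.Data.MySqlClient"
                 description=".Net Framework Data Provider for MySQL"
                 type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data, Version=6.6.4.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d" />
          </DbProviderFactories>
        </system.data>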

  • Why *do* Windows print queues occasionally choke on a print job

    - by Ian
    Y'know the way Windows print queues will occasionally stop working, with a print job at the head of the queue which just won't print and which you can't delete? Anyone know what's going on when this happens? I've been seeing this since the NT4 days and it still happens on 2008. I'm talking about standard IP-connected laser printers - nothing fancy. I support a lot of servers and loads of workstations and see this happen a few times a year. The user will call saying they can't print. When you examine the print queue, which in my case will generally be a server-based queue shared out to the workstations, you find a print job which you cannot cancel. You also can't pause it, reinitialize it, nothing. Stopping the spooler is the usual trick and works sometimes. However, I occasionally see cases where even this doesn't cure it and a reboot is the only solution: pause the queue, reboot, and when it comes back up the job can then be deleted. Once it's gone, the printer happily goes back to its normal state. No action is ever necessary on the printer itself. I regard having to reboot as a last resort and don't like it. What on earth can be going on when stopping the process (spooler) and restarting it doesn't clear the problem? It's not linked to any manufacturer either. I've seen this on HPs, Lexmarks, Canons, Ricohs, on lasers, on plotters... can't say I ever saw it on a dot matrix. Anyone got any ideas as to what may be going on? Ian
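
    For what it's worth, restarting the service leaves the spool files on disk, so a corrupted job can survive a plain restart. The heavier reset, as a sketch (note it discards every queued job on that server):

        net stop spooler
        del /Q %SystemRoot%\System32\spool\PRINTERS\*.*
        net start spooler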

  • DVD drive won't work after installing software

    - by Dan
    The DVD drive was already region-free but for some reason would not play a certain DVD, claiming it was the "wrong region". This is the first time I've played a DVD in the drive, but I've imported a lot of CDs before and they always worked fine, even CDs bought from the USA (I live in the UK). To get around this, I downloaded a piece of software called "DVD Region Killer". (Clicking the link won't start the download, so go ahead and check it.) Since then, the drive isn't recognised. It won't show up in "My Computer", and when I insert a disc it will start to whir but take no action, i.e. iTunes won't recognise that I have put a CD in. In Device Manager, the drive shows up with a caution sign. The device status reads:

        Windows cannot start this hardware device because its configuration information (in the registry) is incomplete or damaged. (Code 19)

    Disabling, uninstalling and reinstalling does not help. Clearly the software download is the issue, but it is difficult to remove. The only files I can find in Program Files are in:

        C:\Program Files (x86)\Elaborate Bytes\DVD Region Killer

    which contains a changelog and an HTML document with no info on uninstalling. It doesn't show up in "Add or Remove Programs", or even as a background process when I press Ctrl-Alt-Del. Apparently it has no interface as such, and can be accessed by an icon in the system tray (see the review in the link), but I don't see the icon. If it helps to know, I have a Dell Inspiron running Windows 8 64-bit, and the model of the DVD drive is:

        MATSHITA DVD+-RW UJ8C2

    Thanks in advance.
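
    Code 19 on an optical drive right after installing or removing disc-filter software is classically caused by stale UpperFilters/LowerFilters values on the CD/DVD device class. The usual cleanup, as a sketch (run elevated; export the key first as a backup, one of the two values may not exist, and a reboot is needed afterwards):

        reg delete "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E965-E325-11CE-BFC1-08002BE10318}" /v UpperFilters /f
        reg delete "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E965-E325-11CE-BFC1-08002BE10318}" /v LowerFilters /f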

  • it opens an "open with" prompt whenever the scheduled task runs

    - by Shashwat Tripathi
    I am trying to run a .vbs file every five minutes using the Windows Task Scheduler. In the Actions tab - New Action, I select the file ("D:\Documents\FC3 Savegames\FC3.vbs") using the open-file dialog. I have made all the other settings properly. But whenever the task begins, it opens the "Open with" dialog every time. Once I chose Notepad in the "Open with" dialog; then another dialog opened from Notepad saying it cannot find the file D:\Documents\FC3.txt and asking whether I want to create a new file, with three buttons: Yes, No and Cancel. Help me figure out what is wrong. I suspect the white space in the file path is causing the problem.

    Added later: I just worked around part of this by using the short path ("D:\Documents\FC3Sav~1\FC3.vbs"). But it still opens the "Open with" dialog every time; now it lists two main options, "Keep using Microsoft Windows Script Host" and "Other Program". The dialog does not open when I run the .vbs file directly.
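
    A sketch of the usual way around both the space and the file-association lookup: point the task's action at the script host explicitly and pass the quoted script path as the argument, so the scheduler never has to resolve .vbs on its own:

        Program/script:  wscript.exe
        Add arguments:   "D:\Documents\FC3 Savegames\FC3.vbs"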

  • MX setup for a domain registrar and web host with the same domain name

    - by Honus Wagner
    I have a client who registered their domain through a registrar, then signed up for hosting with a different provider, using the same domain name with that host (the domain wasn't re-registered; I think declaring the domain on the host was for CNAME records specifically). The registrar routes email at his domain correctly (email is hosted by Google), but here is the problem: on the hosted site, when an administration action occurs, he is supposed to get an email saying so. The site sends the email with PHP, and he never receives messages addressed to his own domain; emails to addresses on any other domain work just fine. I have to imagine something is misconfigured on the host. My best guess is that the host sees that the to and from domains are the same and decides not to route the email externally. Currently, the registrar uses the proper nameservers for the host, and there are MX records on both the registrar and the host (the entries are identical). I hope I've been clear in my question. If you need further clarification, or additional information of any kind, I can provide it. Thanks in advance.
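
    A quick check of what the web server itself believes about mail for the domain (a sketch, run on the hosting box; example.com stands in for the client's real domain):

        dig +short MX example.com

    Even with correct MX records, though, many shared hosts short-circuit DNS for domains they consider "local": if the domain is listed as a locally handled mail domain in the hosting control panel, PHP's mail gets delivered to an unused local mailbox instead of going out to Google, which matches the symptom described here.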

  • Restoring Windows 2008 Server X86 and X64

    - by rihatum
    Restoring Windows 2008 Server (domain controller): we are using Backup Exec System Recovery 2010 to image our DC. This software has a feature to convert the backup into a VMware or Hyper-V VM. I have also used disk2vhd to convert one of our DCs to a VHD, and when I connected it to Hyper-V it booted fine and I can log in - BUT :-) As soon as I log in, I get the activation error: change product key, this product key isn't good for this machine, etc.

    The question is: in a real recovery situation, what would be the procedure to restore it, either virtual or onto a physical box, and still be able to log in and change the product key? In this scenario it's just locked down and I can't do anything. If this is the case, how would I replicate my production environment with these tools? Any ideas? I'd be grateful for some real-world examples here. The same thing happens with our Exchange backup / test restore, whether physical or virtual: we can log in but do nothing else. We don't have the keys, as they are OEM keys, and I'm wondering what would happen in a real scenario: would we be purchasing another key, or using the OEM key on our new server? This is a test environment I am trying to create by restoring our backups either into Hyper-V or onto physical test machines.

    Also, if I build up a machine (Server 2008) in a VM (Hyper-V), how can I restore just the system state backup of my DC into it? Will that give me the activation error too, even though I would use the trial ISOs provided by Microsoft? Kind regards
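
    For the test-lab case specifically, one thing that may buy working time is the rearm mechanism, which resets the activation grace period a limited number of times on a restored copy (a sketch; run in an elevated prompt inside the restored VM, and whether it helps an OEM-keyed image would need testing):

        slmgr.vbs /rearm
        shutdown /r /t 0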

  • 10 GigE interfaces limit single-connection throughput to 1 Gb on a ProCurve 4208vl

    - by wazoox
    The setup is as follows: three Linux servers with Intel CX4 10 GigE controllers and an X-Serve with a Myricom 10 GigE CX4 controller are connected to a ProCurve 4208vl switch, with a myriad of other machines connected through good ol' 1000BASE-T. The interfaces are actually set up as 10 Gig, according to both the switch monitoring interface and the servers (ethtool, etc.). However, a single connection between two 10 GigE equipped machines through the switch is limited to exactly 1 Gb. If I connect two of the 10 GigE machines directly with a CX4 cable, netperf reports the link bandwidth as 9000 Mb/s, and NFS achieves about 550 MB/s transfers. But when I'm using the switch, the connection tops out at 950 Mb/s through netperf and 110 MB/s with NFS. When I open several connections from three of the machines to the fourth, I get 350 MB/s of NFS transfer speed. So each individual 10 GigE port can actually reach much more than 1 Gb, but individual connections are strictly limited to 1 Gb.

    Conclusion: the 10 GigE connection through the switch behaves exactly like a trunk of ten 1 Gb connections. That doesn't make any sense to me, unless HP planned these ports only for cascading switches, or strictly for many-clients-to-single-server connections. Unfortunately this is NOT the envisioned setup; we need big throughput from machine to machine. Is this a not-so-known (or carefully hidden...) limitation of this type of switch? Should I suggest seppuku to the HP representative? Does anyone have any idea how to enable proper behaviour? I upgraded for a hefty price from bonded 1 Gb links to 10 GigE and see exactly ZERO gain! That's absolutely unacceptable.
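
    For anyone reproducing the measurement, the single-stream vs. aggregate contrast is easy to show from one pair of hosts with parallel streams (a sketch using iperf; 10.0.0.2 is a placeholder for the receiver's address):

        # On the receiving machine:
        iperf -s

        # On the sender: one stream, then four in parallel.
        iperf -c 10.0.0.2 -t 30
        iperf -c 10.0.0.2 -t 30 -P 4

    If four streams together reach roughly four times what one stream does, traffic is being spread per-flow at 1 Gb each, which fits the trunk-of-1-Gb-links behaviour described above.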

  • Server 2003 PDC DNS not working... failover server is...

    - by Seth
    In the midst of trying to make better use of processor power, I created a fault-tolerant DNS server a while ago. Since then, I've been trying to add another controller for Exchange, so I thought I would revert to a single primary DNS server in the meantime, and now I'm hanging by a thread. The server I thought I had uninstalled DNS from is still acting as a DNS server, and the PDC does not resolve. Can anybody walk me through this? I'm overwhelmed and can't think straight... I'm afraid that if anyone restarts their machine they won't have internet.

    Update: OK, so from the beginning. I was configuring Exchange on a new Server 2008 box. How it happened I don't know, but it started failing to resolve DNS (exclamation mark on the NIC) even though everything was static. So I decided to remove that server from the problem, because I noticed DNS was in disarray whenever I used the DNS IP of the first server. This is when I tested each DNS server with nslookup. I had uninstalled DNS from the second server, but nslookup was still resolving against that IP address, which has me all wound up because I don't understand it. So, since the first DNS server isn't resolving, I'm assuming that if the second one isn't configured right, I'll lose internet. I'm just confused and don't know where to start troubleshooting...
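
    A starting point for untangling who actually answers DNS (a sketch; the IPs and zone name below are placeholders for the real servers and AD domain):

        rem Ask each candidate server directly, bypassing the client resolver order.
        nslookup mydomain.local 192.168.1.10
        nslookup mydomain.local 192.168.1.11

        rem On the PDC (requires the Server 2003 Support Tools):
        dcdiag /test:dns

    Comparing the two direct queries shows which box is still serving the zone, independent of what the clients happen to be pointed at.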

  • Application (was Firefox) crash on first load on Ubuntu Linux on older Dell Laptop

    - by Ira Baxter
    I've had a Dell Latitude laptop since about 2000 without managing to destroy it. A month ago the Windows 2000 system on it did something stupid to its file system and Windows was completely lost. There was no point reinstalling Windows 2000, so I installed Ubuntu Linux on the laptop. Everything seems normal (it installed, rebooted, I can log in, run GnuChess, poke about). ... but ... when I attempt to launch Firefox from the top bar menu icon, I get a bunch of disk activity, the whirling cursor icon goes round a bit and then (WAS: everything stops: icon, mouse. Literally nothing happens for 5 minutes. Ubuntu is dead, as far as I can tell. EDIT: on further investigation, the spinning icon and the touchpad-operated mouse freeze. There's apparently a little disk activity occurring about every 5 seconds. I wait 5-10 minutes, and the behavior doesn't change.) A reboot, and I can repeat this reliably. So on the face of it, everything works but Firefox. That seems really strange. The only odd thing about this system when Firefox is starting is that while it has an Ethernet port (which worked fine under Windows), it isn't actually plugged into an Ethernet network. As this is the first Firefox start since the Ubuntu install, maybe Firefox mishandles Internet access? Why would that crash Ubuntu? (I need to go try the obvious experiment of plugging it in.) EDIT: I tried to run the Disk Manager tool - not that I cared what it was, just a menu-available application. It started up like Firefox: I get a little tag in the lower left saying Disk P*** something had started, and then the same behavior as Firefox. At this point, I don't think it's the Ethernet. Is it possible that the Ubuntu disk driver can't handle the disk controller in this older laptop? The install seemed to go fine.
