Search Results

Search found 19815 results on 793 pages for 'control'.


  • Automated software installation for MS Windows?

    - by Duncan Bayne
    I am currently setting up a Windows development environment (the whole Visual Studio 2010 stack plus plugins on top of Windows 7). This has got me wondering whether there's a Windows equivalent to what I do for dev environment setup in Ubuntu. It takes literally hours to get a dev environment set up in Windows, involving a lot of manual intervention. On Ubuntu, I have two shell scripts: one I run as root, which configures the system using apt-get (amongst other things), and one I run as me, which configures my user account. Those scripts live in my private Subversion repository. To set up a dev environment from scratch requires five commands:

        sudo apt-get install -y subversion
        svn co http://svn.XXXX.XXX/personal/
        cd personal
        sudo ./ubuntu_setup_root.sh
        ./ubuntu_setup_user.sh

    The only human intervention required is to pick a root password for MySQL. So it takes only a few minutes of human attention to go from a vanilla Ubuntu installation to a full development environment with the latest builds of everything, perfectly tailored down to shortcut keys and wallpaper. Is there an equivalent process for Windows? In an ideal world it'd be something trivially scriptable using C# Script or PowerShell, which could live in source control and make use of a repository of ISOs downloaded from MSDN ...
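
    A package manager gets closest to the apt-get workflow on Windows; the sketch below uses Chocolatey and assumes it is already installed - the package names and wallpaper path are illustrative, not a tested manifest:

        # bootstrap.ps1 - hedged sketch of a scripted Windows dev setup
        choco install -y 7zip git notepadplusplus   # replace with your own package list
        # per-user tweaks (registry, shortcuts, wallpaper) can follow here
        Set-ItemProperty 'HKCU:\Control Panel\Desktop' -Name Wallpaper -Value 'C:\walls\dev.jpg'

    Like the Ubuntu pair of scripts, the machine-wide installs and the per-user tweaks can be kept as two scripts in source control.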


  • Can a hardware firewall block a server accessing its OWN UNC shares?

    - by Simon
    I need to set up a UNC share for my hosted dedicated server to access a share on itself. Unfortunately TFS requires a UNC share. I am on a Windows Server 2008 Standard SP2 64-bit dedicated server behind a PIX 501 firewall hosted with GoDaddy. I just cannot get the server to access itself, and get this error: "Windows cannot access \\SERVER\SHARE. Check the spelling of the name..." etc. I've found numerous questions about this but no answer to my problem.

    - Server 2008 Standard x64 SP2
    - Workgroup - not domain
    - Windows Firewall is off
    - Computer browser service is on
    - I am trying to access \\MYMACHINE\TFS-BUILDS by typing in - or double-clicking. Neither works.
    - Machine has a single network card
    - File-sharing wizard says the share was OK
    - Share was showing under 'Computer management'
    - Permissions are set to 'everyone' full control
    - No obvious errors in the event log
    - Reboot didn't fix it

    Unfortunately I cannot try to access other shares in or out of this machine, because it is a hosted dedicated server and the only machine behind a hardware firewall. The only thing left I can think of is that the hardware firewall needs to be configured. Is this possible? Does 'UNC traffic' go out of the machine and then back in again?


  • Use one NIC to create multiple interfaces for Linux KVM

    - by Phanto
    I am working on a thesis research project, and I am having some difficulty figuring out how to make one NIC spawn several "bridge" interfaces such that each KVM VM can be seen on the local network. I am very new to KVM, and am still exploring what it can do. Below is the scenario that I am attempting to build (on a CentOS/RHEL 6 system):

    - The Linux KVM host has 1 NIC (eth0) connected to a switch.
    - Create multiple "bridge" or equivalent interfaces, spawned off of eth0, that would provide a unique IP for each VM. This is so that each VM can communicate with other hosts on the network, and other hosts on the network can communicate with the VM.
    - IMPORTANT: I would like iptables on the KVM host to be able to manipulate/control/restrict the traffic that would be sent on those "bridge" interfaces.
    - I would like to create a minimum of three VMs, each using its own unique "bridge" interface.

    I have previously made a br0 interface off of eth0, but unfortunately I am unable to add any more to it. It appears that you can only bridge one interface to the NIC; I would like to bridge many to one. Would a tap device be able to do this? If so, how would it be set up? Effectively, I am attempting to replicate what can easily be created with VirtualBox on Windows, where each VM is given a "bridged" interface and can live on the network. I want to achieve this very same thing with Linux KVM. Thank you.

    EDIT: To be more descriptive, I want to achieve the layout shown in the diagram at http://en.gentoo-wiki.com/wiki/KVM#Networking_2 - the host's eth0 and one tap interface per guest (tap0, tap1, ...) are all attached to a single bridge br0 (192.168.1.12), and each guest's nic0 (192.168.1.13, 192.168.1.14, ...) is reachable on the LAN through it.
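
    A single bridge can in fact take eth0 plus one tap per guest, which is what that diagram shows; a minimal sketch, assuming bridge-utils and a recent iproute2 (older systems use tunctl instead of "ip tuntap"; interface names are illustrative):

        # attach the physical NIC to one bridge
        brctl addbr br0
        brctl addif br0 eth0
        # one tap per guest, all enslaved to the same bridge
        ip tuntap add dev tap0 mode tap
        ip tuntap add dev tap1 mode tap
        brctl addif br0 tap0
        brctl addif br0 tap1
        ip link set br0 up; ip link set tap0 up; ip link set tap1 up
        # with net.bridge.bridge-nf-call-iptables=1, iptables can then filter
        # per-guest traffic: iptables -A FORWARD -m physdev --physdev-in tap0 ...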


  • FreeBSD slow transfers - RFC 1323 scaling issue?

    - by Trey
    I think I may be having an issue with window scaling (RFC 1323) and am hoping that someone can enlighten me on what's going on.

    Server: FreeBSD 9, apache22, serving a static 100MB zip file. 192.168.18.30
    Client: Mac OS X 10.6, Firefox. 192.168.17.47
    Network: Only a switch between them - the subnet is 192.168.16/22. (In this test, I also have dummynet filtering simulating an 80ms ping time on all IP traffic. I've seen nearly identical traces with a "real" setup, with real internet traffic/latency also.)

    Questions: Does this look normal? Is packet #2 specifying a window size of 65535 and a scale of 512? Is packet #5 then shrinking the window size so it can use the 512 scale and still keep the overall calculated window size near 64K? Why is the window scale so high?

    Here are the first packets from wireshark. For packets 5 and 6 I've included the details showing the window size and scaling factor being used for the data transfer. Code:

        No.  Time      Source         Destination    Protocol Length Info
        108  6.699922  192.168.17.47  192.168.18.30  TCP   78  49190 > http [SYN] Seq=0 Win=65535 Len=0 MSS=1460 WS=8 TSval=945617489 TSecr=0 SACK_PERM=1
        115  6.781971  192.168.18.30  192.168.17.47  TCP   74  http > 49190 [SYN, ACK] Seq=0 Ack=1 Win=65535 Len=0 MSS=1460 WS=512 SACK_PERM=1 TSval=2617517338 TSecr=945617489
        116  6.782218  192.168.17.47  192.168.18.30  TCP   66  49190 > http [ACK] Seq=1 Ack=1 Win=524280 Len=0 TSval=945617490 TSecr=2617517338
        117  6.782220  192.168.17.47  192.168.18.30  HTTP 490  GET /utils/speedtest/large.file.zip HTTP/1.1
        118  6.867070  192.168.18.30  192.168.17.47  TCP  375  [TCP segment of a reassembled PDU]

    Details:

        Transmission Control Protocol, Src Port: http (80), Dst Port: 49190 (49190), Seq: 1, Ack: 425, Len: 309
        Source port: http (80)
        Destination port: 49190 (49190)
        [Stream index: 4]
        Sequence number: 1    (relative sequence number)
        [Next sequence number: 310    (relative sequence number)]
        Acknowledgement number: 425    (relative ack number)
        Header length: 32 bytes
        Flags: 0x018 (PSH, ACK)
        Window size value: 130
        [Calculated window size: 66560]
        [Window size scaling factor: 512]
        Checksum: 0xd182 [validation disabled]
        Options: (12 bytes)
            No-Operation (NOP)
            No-Operation (NOP)
            Timestamps: TSval 2617517423, TSecr 945617490
        [SEQ/ACK analysis]
        TCP segment data (309 bytes)

    Note: originally posted at http://forums.freebsd.org/showthread.php?t=32552
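
    FreeBSD derives the scale factor it advertises from the kernel's socket-buffer ceilings, so a scale of 512 usually just reflects large buffer limits rather than a fault; a read-only diagnostic sketch (FreeBSD 9 sysctl names):

        sysctl net.inet.tcp.rfc1323      # 1 = window scaling + timestamps enabled
        sysctl kern.ipc.maxsockbuf       # ceiling that drives the advertised scale
        sysctl net.inet.tcp.sendbuf_max net.inet.tcp.recvbuf_max
        sysctl net.inet.tcp.sendbuf_auto net.inet.tcp.recvbuf_auto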


  • apache 2.4, mod_proxy_fcgi not honouring .htaccess, workaround needed

    - by user229874
    I am using apache 2.4.7 with mod_proxy_fcgi for the purpose of passing php through to php-fpm (this will be used for a shared hosting environment). The htaccess works fine for non-php files, but once it hits the rewrite rule that proxies through the php requests, the htaccess is ignored. I know why it is happening. The question is: how do I work around it? That is: how do I force apache to treat the request to a php file as a request to a local file, and then proxy it through?

    I have spent substantial time researching this problem, and the following "answers" were given as solutions:

    1) "Use apache configuration instead of .htaccess" - a valid solution, but not for a shared hosting environment (I am not going to give access to the apache configuration to shared hosting customers ;)).
    2) "Don't use .htaccess, as it has performance/security/other issues" - well, how else would shared hosting customers control access/URL rewriting on their site? Besides, if .htaccess were not a requirement I would simply use nginx.
    3) "Put the rewrite rule for the proxy inside of a <Directory> block" - this is incorrect, and it does not work.

    This behaviour appears to be not a bug but a "feature", as per https://issues.apache.org/bugzilla/show_bug.cgi?id=54887
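
    For reference, later 2.4.x releases (2.4.10 and up) added a handler-based way to hand php files to php-fpm that leaves .htaccess processing intact, since no rewrite/proxy rule intercepts the request; a sketch of that configuration (the FPM socket address is illustrative):

        <FilesMatch "\.php$">
            SetHandler "proxy:fcgi://127.0.0.1:9000"
        </FilesMatch>

    On 2.4.7 this directive form is not available, which is part of why the rewrite-based approach is the one commonly attempted there.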


  • Remote Desktop Network Level Authentication Not Supported

    - by Iszi
    I'm running Windows XP Professional SP3 x86, trying to connect to a system with Windows 7 Ultimate SP1 x64. Recently, I updated the Remote Desktop Connection software on the XP system in hopes of using Network Level Authentication (NLA) for my connections to the Windows 7 box. After the update, I connected to the Windows 7 box over RDP and enabled NLA, believing that the updated client should support it. After disconnecting and attempting to reconnect, I'm presented with the following error:

        The remote computer requires Network Level Authentication, which your computer does not support. For assistance, contact your system administrator or technical support.

    So, I checked the About page in Remote Desktop Connection to make sure the update had applied. This is what I see:

        Remote Desktop Connection
        Shell Version 6.1.7600
        Control Version 6.1.7600
        © 2007 Microsoft Corporation. All rights reserved.
        Network Level Authentication not supported.
        Remote Desktop Protocol 7.0 supported.

    I thought NLA was supposed to be a part of RDP 7.0 clients. Is there a component I'm missing somewhere?


  • DNS nameserver, A and CNAME records

    - by David
    Hi - I am inexperienced in the configuration of DNS and have an issue with my domain hosting setup. I have two domains, 'www.mydomain1.com' and 'www.mydomain2.com', with mydomain2 pointed at the same place as mydomain1. The domains were passed to me recently by the person who previously controlled them. I have an account with Fasthosts in the UK. When I accepted the domains I could not access the DNS settings, and enquired with Fasthosts as to why. They replied saying 'The delegate hosting option for both domains were enabled and this is the reason why you were unable to find the option to edit the advanced DNS records. I have now disabled the delegate hosting option so you can now edit the advanced DNS records for both domains in your account.' When I log into the Fasthosts control panel now I can access the DNS controls, but both domains have no A record or CNAME record set up. I am concerned that Fasthosts have blatted the previous nameserver entries and set me up on theirs, but not added any records. 'www.mydomain1.com' currently still works, but 'www.mydomain2.com' does not find the site anymore. I am worried I will lose mydomain1 too as the DNS changes filter through the system. My web hosting is at 'xxx.xxx.xxx.xxx/mydomain1.com/' and this is where I want both domains to point. Any advice would be much appreciated. One thing which is confusing me is that, because I am on a shared server, I have to put 'xxx.xxx.xxx.xxx/mydomain1.com/' to get to my site rather than just 'xxx.xxx.xxx.xxx'. The form on Fasthosts for the A record only allows an IP to be entered - does it add the mydomain1.com/ onto the end itself? Thanks for any help given - I'm quite worried about this. David
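
    A quick way to see what the records actually say right now, from any machine with dig (the domain names are placeholders as above):

        dig +short NS mydomain1.com      # which nameservers are authoritative now
        dig +short A www.mydomain1.com   # what the name currently resolves to
        dig +short A www.mydomain2.com

    Note that an A record can only map a name to a bare IP; the trailing '/mydomain1.com/' path is an HTTP-level detail that DNS cannot express, so the form would not append it.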


  • Cloning a KVM guest OS to a vmdk file

    - by Bond
    I have a production environment where I have 4 guest OS running on a Ubuntu server which uses KVM. These OS are in an LVM-based setup. I want these virtual machines to be available in vmdk format as well, so that people can do experiments with them in a VMware environment (or it can be Xen too) that is separate from the KVM server. I would not have any control over that other environment, so I want to give people vmdk images of these virtual machines. The production virtual machines will still keep running on the KVM server, but the VMs on which experiments would be done would be of type vmdk (vmdk is a constraint). Here is the output of lvscan:

        ACTIVE   '/dev/abcd/lvm1' [100.00 GiB] inherit
        ACTIVE   '/dev/abcd/lvm2' [150.00 GiB] inherit
        ACTIVE   '/dev/abcd/lvm3' [50.00 GiB] inherit
        ACTIVE   '/dev/abcd/lvm4' [100.00 GiB] inherit

    I was reading the man page of qemu-img, and what I understand is that I need to first create a qcow image file, which I need to populate and then convert to a vmdk file. Is that understanding correct? Now suppose /dev/abcd/lvm4 is the virtual machine with which I am going to start this experiment. I can shut down the production VMs for some time to do this. So is the following the correct way to proceed on server 1 (where KVM is running):

        qemu-img convert -c -f raw -O vmdk /dev/abcd/lvm4 /backup/lvm4.img

    or will it affect the lvm4 on KVM server 1? I do not want the VM running on the original server to lose any of its content at all, but I also want a vmdk file for each of the guest OS on KVM. Before I proceed with any of the above on the production machine, I just want to make sure that I am doing the correct thing, so I am asking here.
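
    qemu-img convert only reads its input, and no intermediate qcow step is needed - it converts straight from raw to vmdk. Converting a live LV risks capturing an inconsistent image, though, so one common way to avoid a long shutdown is to convert from an LVM snapshot; a sketch, assuming the volume group has free extents (names and snapshot size are illustrative):

        # freeze a point-in-time view of the guest's disk, then convert from it
        lvcreate --snapshot --name lvm4snap --size 5G /dev/abcd/lvm4
        qemu-img convert -f raw -O vmdk /dev/abcd/lvm4snap /backup/lvm4.vmdk
        lvremove -f /dev/abcd/lvm4snap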


  • Win 2008 R2 terminal server and redirected printer queue security

    - by Ian
    I have a case where I need a non-privileged account to be able to make a modification to the redirected printer. I know it's not advisable, but we're not giving them access - changes will be made in code. So, following the docs (http://technet.microsoft.com/en-us/library/ee524015(WS.10).aspx), I modified the default security for new printer queues. This doesn't work, though, as Windows doesn't seem to assign the privileges you configure in the printer admin tool to redirected printer queues. As a test, I added a non-privileged test user to the default security tab in the printer admin tool (Control Panel - Admin Tools - Printer Admin). I assigned it all privileges (it's a test) and logged the user into the terminal server. The redirected printers duly appeared as usual. However, if I open the printer properties - security tab, the user appears in the list of accounts/groups, but the options I selected (all privileges) are not set. Instead the user's special privileges box is marked, and when I click on 'advanced options' and view them, there is nothing marked. So, something is clearing these options... the question is why, and how can I convince it not to? Ian


  • Getting 2560x1600 out of an ATI Radeon HD 4670 on Windows 7

    - by Alexey
    Greetings, I've got a Dell Studio XPS 1640 laptop (with an ATI Radeon HD 4670 graphics card) running Windows 7, and just bought a Dell 3007HPC 30-inch monitor for it. I'm trying to figure out how to get the full 2560x1600 experience out of this setup. Here's what I've done so far:

    - Plugged in using an HDMI cable and an HDMI-to-DVI converter on the monitor side.
    - Opened up Screen Resolution; the maximum supported setting is 1920x1080. Tried that (several times) - sometimes it doesn't work at all (blank screen); other times, it only shows the first 1280x800 pixels on the bigger screen.
    - Tried using the Catalyst Control Center - played with various settings there, couldn't get the screen to show anything interesting.
    - Tried using PowerStrip to set a custom resolution; again, no luck.
    - Spoke to a Dell Preferred Custom Support guy for about an hour before giving up. He remote-accessed my computer, and told me that (1) the maximum supported resolution for the XPS 1640 is 1920x1080, and (2) 'it seems to be working from where he sees it, must be a connection issue'.

    None of this has helped. Does anybody have ideas? Should I be using a different cable setup? Am I using PowerStrip wrong?


  • Apt pin and self-hosted apt repo

    - by Hamish Downer
    We have our own apt/deb repository with a handful of packages where we want to control the version. Crucially this includes puppet, which can be sensitive to versions being different. I want our desktops to only get puppet from our repository, but also for people to be able to add their own PPAs, enable backports, etc. The current problem we have is backports on Ubuntu Lucid. Some important lines from /etc/apt/sources.list:

        deb http://gb.archive.ubuntu.com/ubuntu/ lucid main restricted universe multiverse
        deb http://gb.archive.ubuntu.com/ubuntu/ lucid-updates main restricted universe multiverse
        deb http://gb.archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu/ lucid-security main restricted universe multiverse
        deb http://deb.example.org/apt/ubuntu/lucid/ binary/

    And in /etc/apt/preferences.d/puppet:

        Package: puppet puppet-common
        Pin: release a=binary
        Pin-Priority: 800

        Package: puppet puppet-common
        Pin: release a=lucid-backports
        Pin-Priority: -10

    Currently policy says:

        $ sudo apt-cache policy puppet
        puppet:
          Installed: (none)
          Candidate: (none)
          Package pin: 2.7.1-1ubuntu3.6~lucid1
          Version table:
             2.7.1-1ubuntu3.6~lucid1 -10
                500 http://gb.archive.ubuntu.com/ubuntu/ lucid-backports/main Packages
                100 /var/lib/dpkg/status
             2.6.14-1puppetlabs1 -10
                500 http://deb.example.org/apt/ubuntu/lucid/ binary/ Packages
             0.25.4-2ubuntu6.8 -10
                500 http://gb.archive.ubuntu.com/ubuntu/ lucid-updates/main Packages
                500 http://security.ubuntu.com/ubuntu/ lucid-security/main Packages
             0.25.4-2ubuntu6 -10
                500 http://gb.archive.ubuntu.com/ubuntu/ lucid/main Packages

    If I use n= instead of a= then I get "Package pin: (not found)". I'm just plain confused at this point as to what I should use. Any help appreciated.
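
    One alternative worth trying is pinning by the repository's origin (its hostname), which sidesteps guessing the right archive name for a flat "binary/" repo; a sketch of /etc/apt/preferences.d/puppet under that approach (the hostname is illustrative, matching the sources.list above):

        Package: puppet puppet-common
        Pin: origin deb.example.org
        Pin-Priority: 800

        Package: puppet puppet-common
        Pin: release a=lucid-backports
        Pin-Priority: -10

    A priority of 800 beats the default 500 of every other source, so the repo's version should win without blocking users from adding their own PPAs.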


  • Creating metapackages

    - by olpanis
    Got some problems while creating metapackages. I tried to create a metapackage containing forensic toolkits; on Ubuntu I even got problems creating a .deb file:

        dpkg-source: error: can't build with source format '3.0 (quilt)': no orig.tar file found
        dpkg-buildpackage: error: dpkg-source -b forensics-0.1 gave error exit status 255

    Later I tried it using Debian 5. Creating the deb works, but there I ran into some problems with Depends:

        debian:/home/matthias/Desktop/meta# dpkg --install *.deb
        Selecting previously deselected package meta.
        (Reading database ... 96897 files and directories currently installed.)
        Unpacking meta (from meta_0.1-1_i386.deb) ...
        dpkg: dependency problems prevent configuration of meta:
         meta depends on python (>= 2.6.6-2); however:
          Version of python on system is 2.5.2-3.
        dpkg: error processing meta (--install):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         meta

    The control file looks like this:

        Source: meta
        Section: unknown
        Priority: extra
        Maintainer: root <[email protected]>
        Build-Depends: debhelper (>= 7)
        Standards-Version: 3.7.3
        Homepage: <insert the upstream URL, if relevant>

        Package: meta
        Architecture: any
        Depends: python (>= 2.6.6-2)
        Description: short description
         forensic toolkits

    Any ideas? Kind regards
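
    The Depends failure on Debian 5 is expected: lenny ships python 2.5.2-3, older than the 2.6.6-2 the control file demands. For a pure metapackage, the equivs tool also sidesteps the source-format machinery that produced the quilt error; a hedged sketch (the package name and dependency list are illustrative):

        apt-get install equivs
        equivs-control forensics-meta      # writes a template control file
        # edit forensics-meta: set Package: forensics-meta and
        #   Depends: sleuthkit, autopsy    (your toolkit list)
        equivs-build forensics-meta        # emits forensics-meta_*.deb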


  • Sync two Exchange accounts, or read-only access to subfolders

    - by cpgascho
    This is two questions, kind of. The situation is as follows. I am running SBS 2008 with Exchange 2007. There is a shared account which has subfolders to keep track of the process of jobs that are coming into the company (i.e. sales). I need to give other people in the company read access to this mailbox, not full control. When I give read-only access to the root, other users can only see the Inbox and not the subfolders; permissions have to be applied to each folder. One solution I have considered is creating a secondary mailbox that everyone could have full access to, which would have a one-way sync from the sales mailbox to the secondary mailbox. Then people could see what was happening without messing up the main mailbox by accident (at worst they would mess up the secondary mailbox). Ideally I could find a way to propagate the read-only permissions to all the subfolders. I have tried using PFDavAdmin to do this, but have not been able to get it to connect successfully from Windows 7 to Exchange 2007. Any idea on how to:
    1. Propagate permissions (get PFDavAdmin to work??!)
    2. Sync mailboxes
    3. Other solution?
    Thanks
    Chris


  • USB device not working properly on a ThinkPad T60

    - by Xavierjazz
    I have just started getting a message that a USB device is not working properly. This is on a new ThinkPad T60 running Windows XP Professional SP3. I have had all devices attached for about 10 days or so. When I go to Device Manager there is no sign of a problem: everything is working correctly. I am unable to find out which device it is. I searched this forum, but the only reference I found was to older computers which may not be USB 2.0 capable. This is not my problem. Any ideas? Thanks.

    EDIT: I realized I had a hard drive attached but turned off. Although I could not track down any error messages except the balloon that came up, I have disconnected it and, so far, no messages. We'll see over the next day or two, but this may be the problem. Thanks.

    EDIT: This has not solved the problem. I get the message that one of the USB devices has malfunctioned, and it points to "USB root hub (2 ports)" and shows an unused port and an unknown device. However, when I check my Device Manager, it says that there are no problems, everything is working as it should. ??

    EDIT: I have now found the event log viewer, and there are 2 types of error messages; they do not relate to the time that I get the balloon. 2 of the 3 are "Ati2mtag" errors and the third is "System Control Manager". Are these related to my problem, or does the balloon just pop up randomly?

    EDIT: Well, I'm still having the problem, and have narrowed it down to a malfunctioning device. Thanks to all.


  • Ubuntu 13.04 to 13.10: Filesystem check or mount failed

    - by SamHuckaby
    I attempted to upgrade from Ubuntu 13.04 to 13.10 today, and mid-upgrade the system started flaking out and eventually locked up entirely. I was forced to restart the computer, and am now unable to get the computer to boot up at all. When I boot currently, it takes me to the GRUB menu, and I can choose to boot normally, or boot an older version. I have tried several things, which I list below, but no matter what, when I try to finish booting into Ubuntu, I receive the following error:

        Filesystem check or mount failed.
        A maintenance shell will now be started.
        CONTROL-D will terminate this shell and continue booting after re-trying filesystems.
        Any further errors will be ignored
        root@ubuntu-computername:~#

    I have run fsck -f and everything appears correct: no errors are reported, and it passes all 5 checks. If I run fdisk -l then I get the following information:

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 4096 bytes / 4096 bytes
        Disk identifier: 0x00010824

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048   608456703   304227328   83  Linux
        /dev/sda2       608458750   625141759     8341505    5  Extended
        Partition 2 does not start on physical sector boundary.
        /dev/sda5       608458752   625141759     8341504   82  Linux swap / Solaris

        Disk /dev/sdb: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x0fb4b7e8

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1            8192   625139711   312565760    7  HPFS/NTFS/exFAT

    I am considering just installing a new OS on the other disk, which currently has nothing on it, and then just attempting to scrape my data off the old disk (thankfully I didn't encrypt the files). Really my question is this: can I salvage this Ubuntu install, or should I give up and just reinstall?
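
    Since the lockup interrupted the upgrade midway, the packaging system is likely left half-configured; the standard recovery from the maintenance shell is a sketch like this (generic dpkg/apt steps, not specific to this machine):

        mount -o remount,rw /      # the maintenance shell usually mounts root read-only
        dpkg --configure -a        # finish configuring half-installed packages
        apt-get -f install         # resolve any broken dependencies
        apt-get dist-upgrade       # complete the interrupted 13.10 upgrade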


  • Make Google Chrome's address bar prefer page titles to domain names when offering completions?

    - by Ryan Thompson
    I've recently switched from Firefox to Chrome, and the thing I miss most from Firefox is the "Awesome Bar" that suggests completions for what I type primarily based on page titles, and only secondarily based on domain names. Chrome offers both matching URLs and titles, just like Firefox, but Chrome seems to always prefer a matching domain name over a matching page title or a match to another part of the URL (besides the domain), no matter how many times I pass over the former for the latter. In fact, Chrome also prefers to suggest a search rather than matching anything other than a domain name. So is there any hidden preference I can change to tell Chrome that I care more about page titles than domain names?

    Example: I want to go to Google Reader, so I press Control+L and begin typing "reader". The URL for Google Reader is http://www.google.com/reader/view/#overview-page, so the domain name is www.google.com, which does not contain the word "reader". So the first option that Chrome suggests is either another site that has "reader" as part of the domain, or a search for "reader" with the default search engine. No matter how many times I scroll down and select Google Reader, Chrome never "learns" that that's what I want.


  • No signal on monitor after plugging it into a Linux box

    - by yaroot
    I use my old computer as a NAS, so I removed the monitor after I installed Linux on it (disconnected the VGA cable). I use ssh to control the machine and it works fine. Until, some day, after a kernel/software upgrade or messing up some configs, I cannot connect to it through ssh; then I have to plug the monitor back in, but the monitor says "No input signal". So I have to restart the computer WITH the monitor connected, and the monitor's back! I think the computer/Linux kernel doesn't detect the monitor plug-in event. So how can I start my Linux box without a monitor, but when it goes wrong still plug my monitor (VGA) back in and use the console?

    Edit: just one PCI-E video card, which has DVI, VGA and TV-out (S-Video).
    Edit 2: Xorg is not running. I just need the console (CTRL+ALT+F1). The problem is, if the machine booted without a monitor connected, it won't give me a terminal after I attach the VGA cable while it's running. Clearly the monitor is not auto-detected the way a USB device would be. I'm wondering how to get the monitor auto-detected.
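
    With kernel modesetting drivers, one way around the missing hotplug detection is to force the VGA connector on from the kernel command line, so the console exists whether or not a monitor was present at boot; a hedged sketch (the connector name VGA-1 is illustrative - check /sys/class/drm for the real one):

        # in /etc/default/grub, then run update-grub and reboot
        GRUB_CMDLINE_LINUX_DEFAULT="quiet video=VGA-1:1024x768@60e"
        # the trailing 'e' tells the kernel to treat the output as enabled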


  • CryptSvc not matched by Windows 7 Firewall rule

    - by theultramage
    I am using Windows Firewall in conjunction with a third-party tool to get notified about new outbound connection attempts (Windows Firewall Notifier or Windows Firewall Control). The way these tools do it is by setting the firewall to deny by default, and adding an auditing policy to log blocked connections into the Security event log. Then they watch the log, and display a notification about newly added entries.

        netsh advfirewall set allprofiles firewallpolicy blockinbound,blockoutbound
        auditpol /set /subcategory:{0CCE9226-69AE-11D9-BED3-505054503030} /failure:enable

    With this configuration in place, I now need to craft outbound allow rules for applications and system services. Here is the rule for CryptSvc, the service frequently used for certificate validation and revocation checking:

        netsh advfirewall firewall add rule name="Windows Cryptographic Services" action=allow enable=yes profile=any program="%SystemRoot%\system32\svchost.exe" service="CryptSvc" dir=out protocol=tcp remoteport=80,443

    The problem is, this rule does not work. Unless I change the scope to "all programs and services" (which is really unhealthy), connection denied events like the following will keep appearing in the security log:

        Event 5157, Microsoft Windows security auditing.
        The Windows Filtering Platform has blocked a connection.
        Application Information:
            Process ID: 1476 (<- svchost.exe with CryptSvc and nothing else)
            Application Name: \device\harddiskvolume1\windows\system32\svchost.exe
        Network Information:
            Direction: Outbound
            Source Address: 192.168.0.1
            Source Port: 49616
            Destination Address: 2.16.52.16
            Destination Port: 80
            Protocol: 6 (<- TCP)

    To make sure it's CryptSvc, I have let the connection through and reviewed its traffic; I also configured CryptSvc to run in its own svchost instance to make it more obvious:

        ;sc config CryptSvc type= share
        sc config CryptSvc type= own

    So... why is it not matching the firewall rule, and how do I fix that?
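
    Service-scoped firewall rules match on the connecting process's service SID, so one thing worth verifying - a diagnostic sketch in the same sc register, offered as a possibility rather than a confirmed fix - is whether CryptSvc carries a service SID at all:

        sc qsidtype CryptSvc
        rem if it reports NONE, a rule scoped to service="CryptSvc" has nothing to match on
        rem sc sidtype CryptSvc unrestricted   (assumption: changing the SID type is acceptable here)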


  • The RTL8111/8168B NIC under Linux and the r8168 driver

    - by nik
    So I've got one of the infamous RTL8168 Realtek ethernet NICs, which have some problems under Linux. After some research, I found out I had to use the r8168 driver for this card (and not the r8169, which still loads when nothing else is available), which I did. So now everything works fine... sort of. My download and upload rates are more than halved compared to what I should get. When I test (with e.g. speedtest) I get something like 20M (often 15M) down and 30M up, but if I test under Windows (everything is otherwise identical: same ethernet cable, same connection, at the same time of day (well, 5 min apart)...), I get 50M upload/download (which is what I expect). Where can it come from? Here's some info:

        ~ # lspci
        [...]
        06:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06)

        ~ # modinfo r8168
        filename:       /lib/modules/3.2.1-gentoo-r2/net/r8168.ko
        version:        8.027.00-NAPI
        license:        GPL
        description:    RealTek RTL-8168 Gigabit Ethernet driver
        author:         Realtek and the Linux r8168 crew <[email protected]>
        srcversion:     0A6E9F1D4E8E51DE4B6BEE3
        alias:          pci:v00001186d00004300sv00001186sd00004B10bc*sc*i*
        alias:          pci:v000010ECd00008168sv*sd*bc*sc*i*
        depends:
        vermagic:       3.2.1-gentoo-r2 SMP mod_unload
        [...]

        ~ # mii-tool -v
        eth0: negotiated 100baseTx-HD, link ok
          product info: vendor 00:07:32, model 17 rev 4
          basic mode:   autonegotiation enabled
          basic status: autonegotiation complete, link ok
          capabilities: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD
          advertising:  100baseTx-HD 10baseT-FD 10baseT-HD flow-control
          link partner: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD
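
    The mii-tool output above shows the link negotiated 100baseTx-HD: half duplex alone is enough to gut throughput, and the card is only advertising half duplex at 100M. A hedged diagnostic sketch with ethtool (assuming it is installed; the advertise bitmask values are the standard ethtool ones):

        ethtool eth0                                 # confirm negotiated speed/duplex
        # re-advertise full duplex modes and renegotiate
        ethtool -s eth0 autoneg on advertise 0x02f   # 10/100 half+full plus 1000baseT-Full
        # or, if the switch port is known-good, pin the mode instead:
        ethtool -s eth0 speed 100 duplex full autoneg off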


  • AWS document on number of databases allowed on an Amazon RDS instance

    - by user35042
    At the Amazon RDS FAQ there is the question "What is a database instance (DB Instance)?". The entire answer (as of mid-June 2012) is:

    "You can think of a DB Instance as a database environment in the cloud with the compute and storage resources you specify. You can create and delete DB Instances, define/refine infrastructure attributes of your DB Instance(s), and control access and security via the AWS Management Console, Amazon RDS APIs, and Command Line Tools. Multiple MySQL databases or SQL Server databases (up to 30) or Oracle database schemas can be created on a given DB Instance."

    The last part of that quote, "Multiple MySQL databases or SQL Server databases (up to 30) or Oracle database schemas", I interpret to mean that you can have an "unlimited" number of databases on an RDS MySQL or Oracle instance, but only 30 databases on an MS SQL Server instance ("unlimited" meaning not limited by the RDS infrastructure itself). This was asked in the Stack Overflow question "Does Amazon RDS support multiple databases per instance?", but the answer there quoted an older version of the FAQ. What I am looking for is an Amazon document that clarifies this question, or else someone who has experience using Amazon RDS who can attest to what the situation actually is.


  • Unable To Install SQL Server 2008 On Windows 7 & Windows XP

    - by mickburkejnr
    Hi guys, you are my final hope of salvation! For the last five hours I've been trying to install SQL Server 2008, first on Windows 7 and then on Windows XP. Both have had the same issue. Even before the installation starts, an error message appears saying:

        Microsoft .NET Framework 3.5 SP1 installation has failed. SQL Server 2008 Setup requires .NET Framework 3.5 SP1 to be installed.

    I have installed everything in the hope of getting past this. The service pack it refers to has even been installed, and I can see it in the Control Panel! I have installed Framework 2.0, Framework 2.0 SP1, Framework 3.5, Framework 3.5 SP1, Windows Installer 3.5 and Windows Installer 4.0. I've even installed the Service Pack for SQL Server 2008... but nothing works! Does anyone have an idea what I'm doing wrong or what could be wrong? I'm seriously close to the end of my tether; I've even sent Steve Ballmer an email with my frustration. I would seriously appreciate some help with this. Many thanks!


  • Cannot resolve Hostname to IP, but IP to hostname works

    - by dotnetdev
    I have deployed a bunch of Windows server VMs on a cloud hosting service. These machines are all joined to a domain controller on the same service, which also hosts DNS. All of the domain-joined machines have dynamic IPs (along with the DC). If I try to resolve any of the hostnames remotely, it fails. For example, I am in SQL Server Reporting Services and I need to connect to a remote server. I provide the hostname of the desired target server and this fails, but if I provide the IP instead, it works. How can I pass the hostname and have it resolve to the IP? Is there anything I need to look for in the DNS server? It has records of the hostnames (in forward lookup, I think), but reverse is empty. Isn't it the case that forward lookup resolves IP to hostname and reverse resolves hostname to IP? Also, I don't know the subnet mask because this is not in my control, so the machines may not be in the same subnet - can this be a cause of the problem? Where is the problem? Thanks
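
    A quick way to see which server is answering, and whether the forward records are actually visible from the failing client (the hostname and DC address are placeholders):

        nslookup targetserver            # uses the client's configured DNS server
        nslookup targetserver 10.0.0.5   # query the DC's DNS service directly

    For what it's worth, forward lookup is the hostname-to-IP direction, so an empty reverse zone by itself would not break resolving a hostname to an address.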


  • Migrating email forwarding entries from DirectAdmin to Google Apps (Free edition)

    - by bobo
    I have a website hosted in a shared hosting account, and it comes with a DirectAdmin (DA) control panel. From there, I can see some email forwarding entries. I would like to migrate the email server to Google Apps, so I am going to change the MX records in DA to point to the Google email servers. For the existing email accounts that I see in DA, I will re-create them in Google Apps. But for those email forwarding entries, I am confused. If I keep them there, will they still work after I have changed the MX records to point to the Google email servers? If not, this means I will need to re-create them in Google Apps, right? Unfortunately, Google Apps (Free edition) does not seem to allow email forwarding like those in DA, unless I choose to use other editions (http://www.google.com/support/a/bin/answer.py?answer=175745). In DA, when I have created an email forwarding entry such as [email protected] - [email protected], I do not really need to create a dummy [email protected] email account, and DA will still do the forwarding properly. The best I can do now, without upgrading the Google Apps edition, is to simply create dummy email accounts in Google Apps and set up forwarding inside each email account - is this correct?


  • Need help with MS Access 07 & Reports

    - by Moe
    Hey there, I'm finding it difficult to get MS Access reporting working the way I'd like. What I'm trying to do is: a) in my database, store a URL (an external HTTP file) that points to a .jpeg. I'd like to use that URL to show the image on the report sheet. I have tried to use 'Control Source' on the data panel, but with no success. Is there any way I can get dynamic images to show up for each record? Also, I have a couple of relational tables. One defines values, for example: DefinePets('petID', 'Name of Pet'). The other one links the main table with the 'DefinePets' table, e.g.: connect('petID', 'mainID', 'extraFeild'). I'd like my report to go into the "connect" table where the currently viewed record's value = mainID, then find petID and return Name of Pet. There is a many-to-many link between DefinePets and the main table (therefore connect joins them up). Or is that too much to ask from a simple package like Access? Thanks.


  • SQL Server Backup problem when browsing to the directory

    - by Richard West
    I want to allow a group (e.g. 'BackupManagers') that can only perform backup and restore operations on certain databases. When creating the BackupManagers user account I checked db_backupoperator. When the user logs in to create a backup, they get an error message similar to the following when they select Tasks - Backup, click on Add in the destination block, and click on the "..." button to browse:

        TITLE: Locate Database Files - MYSERVER\SQL2005
        E:\MSSQL\Backup
        Cannot access the specified path or file on the server. Verify that you have the necessary security privileges and that the path or file exists. If you know that the service account can access a specific file, type in the full path for the file in the File Name control in the Locate dialog box.

    I have confirmed that the user has permissions to the folder. I have even created a share to this folder and had them access it through Explorer. They are able to create and delete files within the folder. I have found that if they type in the path to the file, instead of using the "..." button to browse the directory tree, then they can create a backup file fine. Why is the browse button not working as expected? Thanks!

