Search Results

Search found 34186 results on 1368 pages for 'single machine'.

Page 272 of 1368

  • Network card very slow, only on Windows

    - by J Penguin
    This only happens to one of my machines, and only when booting into Windows 7. No matter what network card I put in, Windows defaults its mode to 10Mbps full duplex. Transfer speed is approximately 1 MB/s. If I set it to 100Mbps, the transfer drops to 100-200K/s. If I set it to 1000Mbps, the connection is lost completely. I've tried swapping in different cards, both PCI-E and PCI. I've tried updating Windows, and I've tried reinstalling the drivers... On this very same machine, if I boot into Fedora, it can use the card at its full capacity of 1000Mbps, transferring 80+ MB/s, and all the cards work just fine when plugged into other machines on the same network. I'm very curious: what could be the reason for this? The only different software that this machine has is VirtualBox with a VPN emulator, but disabling that VPN doesn't seem to do anything. I would like to get this fixed, hopefully without reinstalling Windows. Will that be possible?
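
    For what it's worth, it helps to record what the link actually negotiates under each OS before digging further. On the Fedora side a sketch would be the following (the interface name eth0 is an assumption, and not every driver exposes statistics); on the Windows side the forced speed/duplex lives in the adapter's driver properties in Device Manager:

        ethtool eth0                                   # negotiated speed, duplex and link partner advertisement
        sudo ethtool -S eth0 | grep -i -E 'err|drop'   # error/drop counters after a large transfer, if supported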

    Read the article

  • administrator user unable to login, suspicious user accounts "sky$", "admin$"

    - by mks
    I have a Windows 2008 R2 Standard (64-bit) server running in a virtual machine. Suddenly, from yesterday onwards, I am not able to log in as administrator. Nobody changed the password. I am unable to log in both at the console and via remote desktop. Whenever I log in as Administrator I get this error: "The user name or password is incorrect". Nothing has changed on the machine, and I have logged in successfully several times in the past, both through the console and via remote desktop, on this same machine. One strange thing I noticed: I see some additional user accounts if I try to log in as another user. The suspicious user accounts are: sky$, admin$, SUPPORT_388945a0. Were they created by some malware/virus? Or are they hidden Windows accounts? Microsoft's site says that SUPPORT_388945a0 is: "The Support_388945a0 account enables Help and Support Service interoperability with signed scripts. This account is primarily used to control access to signed scripts that are accessible from within Help and Support Services. Administrators can use this account to delegate the ability for an ordinary user, who does not have administrative access over a computer, to run signed scripts from links embedded within Help and Support Services. These scripts can be programmed to use the Support_388945a0 account credentials instead of the user's credentials to perform specific administrative operations on the local computer that otherwise would not be supported by the ordinary user's account. When the delegated user clicks on a link in Help and Support Services, the script executes under the security context of the Support_388945a0 account. This account has limited access to the computer and is disabled by default." However, I am not sure where "admin$" and "sky$" came from. Has anyone had a similar experience?
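
    A few things worth running from an elevated command prompt (once you can get in with any administrative account), assuming sky$ and admin$ are local accounts rather than domain ones:

        net user                      (lists all local accounts, including sky$ and admin$)
        net user admin$               (shows when its password was last set, last logon, and group memberships)
        net user admin$ /active:no    (disables the account while you investigate)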

    Read the article

  • Wastage of resources in Virtualization

    - by Sabeen Malik
    I have asked this question on SO, but it was suggested that I ask it here on SF, so here it goes: http://stackoverflow.com/questions/3010753/wastage-of-resources-in-virtualization I am not sure if this is the right place to ask the question, however I hope it is. When looking for a VPS earlier today, I was trying to understand how each container would work in the background. Keeping in mind the fact that the operating system uses most of the memory and power on a system, wouldn't having multiple operating systems on the same machine mean more wastage of resources? For instance, if I was running CentOS on a dedicated box and it was running, let's say, 20 background OS-level processes, and I then go and install a virtualization platform and add 5 more CentOS virtual machines on the same system which are exactly the same as the host operating system, doesn't this mean duplication of those 20 processes 6 times? So internally the context switching is happening between 120 processes instead of 20? Further notes: here is an example of what I am thinking. I have a master-slave configuration for a long-running, CPU and memory intensive process, which can be distributed to 4 machines. Let's say that when the process runs on these 4 machines, each with a 1 GHz CPU and 1 GB of RAM, I get 400 results per hour from the cluster (assuming 100 results from one machine). Now I get a bigger machine (let's say 4 GHz and 4 GB of RAM) and run 4 virtual hosts on it, each with a 1 GHz CPU and 1 GB of RAM. Will this configuration give me the same 400 results per hour from these 4 virtual hosts?

    Read the article

  • SSH multi-hop connections with netcat mode proxy

    - by aef
    Since OpenSSH 5.4 there is a new feature called netcat mode, which allows you to bind STDIN and STDOUT of the local SSH client to a TCP port accessible through the remote SSH server. This mode is enabled by simply calling ssh -W [HOST]:[PORT]. Theoretically this should be ideal for use in the ProxyCommand setting in per-host SSH configurations, which was previously often used with the nc (netcat) command. ProxyCommand allows you to configure a machine as a proxy between your local machine and the target SSH server, for example if the target SSH server is hidden behind a firewall. The problem now is that instead of working, it throws a cryptic error message in my face: Bad packet length 1397966893. Disconnecting: Packet corrupt. Here is an excerpt from my ~/.ssh/config:

        Host *
            Protocol 2
            ControlMaster auto
            ControlPath ~/.ssh/cm_socket/%r@%h:%p
            ControlPersist 4h

        Host proxy-host proxy-host.my-domain.tld
            HostName proxy-host.my-domain.tld
            ForwardAgent yes

        Host target-server target-server.my-domain.tld
            HostName target-server.my-domain.tld
            ProxyCommand ssh -W %h:%p proxy-host
            ForwardAgent yes

    As you can see, I'm using the ControlMaster feature so I don't have to open more than one SSH connection per host. The client machine I tested this with is an Ubuntu 11.10 (x86_64) box, and both proxy-host and target-server are Debian Wheezy Beta 3 (x86_64) machines. The error happens when I call ssh target-server. When I call it with the -v flag, here is what I get additionally:

        OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011
        debug1: Reading configuration data /home/aef/.ssh/config
        debug1: Applying options for *
        debug1: Applying options for target-server.my-domain.tld
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug1: auto-mux: Trying existing master
        debug1: Control socket "/home/aef/.ssh/cm_socket/[email protected]:22" does not exist
        debug1: Executing proxy command: exec ssh -W target-server.my-domain.tld:22 proxy-host.my-domain.tld
        debug1: identity file /home/aef/.ssh/id_rsa type -1
        debug1: identity file /home/aef/.ssh/id_rsa-cert type -1
        debug1: identity file /home/aef/.ssh/id_dsa type -1
        debug1: identity file /home/aef/.ssh/id_dsa-cert type -1
        debug1: identity file /home/aef/.ssh/id_ecdsa type -1
        debug1: identity file /home/aef/.ssh/id_ecdsa-cert type -1
        debug1: permanently_drop_suid: 1000
        debug1: Remote protocol version 2.0, remote software version OpenSSH_6.0p1 Debian-3
        debug1: match: OpenSSH_6.0p1 Debian-3 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.8p1 Debian-7ubuntu1
        debug1: SSH2_MSG_KEXINIT sent
        Bad packet length 1397966893.
        Disconnecting: Packet corrupt
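
    One variable worth isolating is the connection multiplexing, since the Host * block (ControlMaster/ControlPersist) also applies to the inner ssh -W hop. A hedged test, using only standard ssh options, is a one-off attempt with multiplexing turned off, or exempting the inner hop explicitly:

        ssh -o ControlMaster=no -o ControlPath=none -v target-server    # one-off test without multiplexing

        # or, in ~/.ssh/config, exempt only the proxy hop:
        Host target-server target-server.my-domain.tld
            HostName target-server.my-domain.tld
            ProxyCommand ssh -o ControlMaster=no -o ControlPath=none -W %h:%p proxy-host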

    Read the article

  • When did my LaTeX files become TeX files!?

    - by andrz_001
    After transferring all my files onto a new machine, all files that were once LaTeX files (having the TeXnicCenter "T" icon) are now TeX files (having the TeXworks icon --I don't remember installing that one!). But... the files are still associated with TeXnicCenter. In other words, the files open with TeXnicCenter, yet the "type of file" is "TeX document". I'm using the same MiKTeX distribution and TeXnicCenter (both for Windows XP) as on the old machine. Again, I don't know how or where I got this TeXworks on the new machine, assuming it came with Windows. A question: could these TeX associations cause me any new surprises, anything unexpected? Like certain packages not working, etc.? Because I can't have that!! Should I not fix it if it ain't broken!? For instance, one particular file yesterday would not produce any PDF output after compiling... after trying many things (other files compiled as they should have), I got to thinking to try it without the setspace package for double spacing!! And it worked. I have no idea why. Anyway.... Real question: how do I revert to the LaTeX file associations? Rhetorical question: would this question have better luck on CTAN or Stack Overflow? I guess I'll find out! Thank you wholeheartedly!
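
    If it comes to re-pointing the extension by hand, the assoc and ftype built-ins can do it from a command prompt; the ProgID below is only a guess, so check what is actually registered first:

        assoc .tex                          (shows the ProgID currently bound to .tex)
        ftype | find /i "texniccenter"      (lists file types TeXnicCenter registered, if any)
        assoc .tex=TeXnicCenter.Document    (rebind .tex, substituting the ProgID found above)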

    Read the article

  • Lenovo S10 Ideapad will not boot while original hard drive is installed, neither from the hard drive nor from USB

    - by aki
    Hello, first time posting here so I'll try to be very clear. I have a Lenovo S10 Ideapad netbook which fails to boot to an OS. It shows the Lenovo splash screen and can get to the BIOS, but it doesn't get to GRUB (it was dual-booting Ubuntu 9.10 and Win 7 and had been working fine for months, i.e. this isn't a new dual boot gone bad). After the splash screen it displays a flashing cursor in the upper left corner. Power cycling doesn't help. Here is what I have done trying to narrow the problem down: the machine will boot to Ubuntu using an install/live USB drive, but only if ANOTHER hard drive is installed or NO hard drive is installed. The boot order always lists USB first. Also, there is a 2 GB RAM upgrade, but I think that's fine; the Ubuntu USB drive boots fine with it, and "free" sees the whole 2 GB of memory. So it seems like the hard drive is bad. I was able to put the bad drive in a different laptop and mount it to recover files. I'm ready to replace the bad hard drive, but I would like to know if this situation makes any sense. If the hard drive is bad, shouldn't I still be able to boot with the Ubuntu USB drive while the bad drive is installed? I would have expected the machine to boot into Ubuntu anyway, even with a bad drive, since the boot order lists USB first. But it seems that when the bad drive is installed, the machine ignores the USB drive and hangs with the flashing cursor. Thanks for any ideas! Sorry for the long post, I just want to put all the info I have up front! Basically I'm going to buy a new drive, but I am mostly curious whether this is a typical, or at least not unusual, situation.
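
    Since the drive mounts in the other laptop, it is also easy to get a verdict on its health there (or via a USB adapter from the live session). A sketch with smartmontools, assuming the suspect drive shows up as /dev/sdb:

        sudo apt-get install smartmontools
        sudo smartctl -H /dev/sdb        # overall SMART health verdict
        sudo smartctl -a /dev/sdb        # full attribute list and error log (reallocated/pending sectors)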

    Read the article

  • Mac Backup Plan

    - by Chuy77
    I'm reviewing my backup plan and would appreciate any thoughts about what more I should do (if anything) to make sure I'm properly covered in case of all hell breaking loose. :-) I have one machine. 1) I run a nightly clone with SuperDuper. I alternate the clone drive weekly so I have two clones, one never more than a week old. 2) I use BackBlaze as a sort of Time Machine in the cloud. It runs all the time and keeps everything on my machine backed up online. 3) I sync all my 1Password logins, etc. to my iPhone once a week. ...And that's it. I feel pretty covered. But I'm always reading stuff like this: http://www.43folders.com/2010/03/15/yes-another-backup-lecture And that doesn't even mention online backup, and seems like a huge pain in the behind. But maybe I'm being naive? Should I have more backups? Thanks for any feedback. I really appreciate it.

    Read the article

  • RDP Connection to Windows 7 stays really slow

    - by Pavlo
    I have an issue connecting to Windows 7 via RDP. I can open an RDP session, but regardless of any settings, the response times are really long. This is particularly the case when opening a web page in a browser. I've tried IE, Firefox and Google Chrome. I also use an RDP connection to a Windows 2008 server from the same client machine, and the speed is perfectly normal with all features turned on. We have Gigabit Ethernet here, so I think it cannot be the client's fault. As for the Windows 7 machine, I've tried shutting all the graphic features off and turning the colors down to 256. Result - the same. If I work locally on the machine, I cannot see any lag. What else have I tried: using the old RDP 5 client from Microsoft, and setting the network autotuninglevel as seen here. Do you have any ideas? Thanks in advance! Update: the problem seems to be with rendering window contents. All the window borders and panes are rendered pretty quickly, but the content shows up very slowly. Also, mouse movements are recognised by the Win 7 box only after a delay. Are there some hidden settings in RDP where one could turn some advanced features off or turn some caching on? I use Bitmap Caching, but this apparently doesn't help.
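
    For reference, the autotuning tweak mentioned above is applied from an elevated command prompt like this, and the current values can be checked the same way:

        netsh interface tcp set global autotuninglevel=disabled
        netsh interface tcp show global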

    Read the article

  • How can I recover a Fedora 12 installation that is showing signs of disk errors?

    - by Bob Cross
    I am currently overseas (i.e., very far from my normal library of tools) and my primary machine that would normally act as the data server in the performance test that we're trying to run is failing to boot to Fedora 12 properly. This is a machine that, as of yesterday, was booting fine. However, this morning, very strange portions of the boot process were complaining with messages such as "unexpected 0x0 in rpcbind" and "bad file descriptor" (I don't have the error in front of me - scavenged a windows installation to get onto serverfault). Eventually, the boot hung for a long time at the NFS service and then brought up what looked like the KDE login screen but neither the mouse nor keyboard functioned. In olden days, I would try to get to a point where I could manage to run fsck and pray that the bad sectors would come back into alignment just long enough for me to scrape the critical data off of the machine. However, now that we live in the future, it seems like our options in situations like this should be a little more varied. Is there a way to recover a Fedora 12 installation with bad disk sectors that won't boot properly? For completeness, I am comfortable working with bootable recovery distros-on-CD and such but I don't know which one is likely to work best with modern Fedora. In the absence of guidance, I'm frantically torrenting the Fedora 12 Live CD and DVD, hoping to try rescue mode before tomorrow morning.
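
    If the disk really is failing, a common first step from a live/rescue environment that has GNU ddrescue is to image the drive before running fsck, so repeated reads don't make things worse; a sketch, assuming the bad disk is /dev/sda and a large external drive is mounted at /mnt/usb:

        ddrescue -f -n /dev/sda /mnt/usb/sda.img /mnt/usb/sda.log       # quick pass, skip problem areas
        ddrescue -f -d -r3 /dev/sda /mnt/usb/sda.img /mnt/usb/sda.log   # go back and retry the bad areas

    After that, fsck or file recovery can be pointed at the image (or a copy of it) rather than at the failing disk itself.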

    Read the article

  • Grub hangs at "Starting up ..." when USB flash card reader is plugged in (on Ubuntu Hardy)

    - by Laurence Gonsalves
    I have a PC with Ubuntu Hardy installed. The machine boots fine unless my USB flash card reader (one of those N-in-1 readers by MediaGear) is plugged in at startup. If the reader is plugged in, the boot process proceeds as normal until it gets to the screen that says "Starting up ...". At that point it just hangs forever. To work around this I currently leave the reader unplugged when booting, and then plug it back in after I see that Ubuntu is actually starting. This is annoying though, especially when I reboot the machine (typically for updates), forget to unplug the reader, and walk away only to come back hours later to find the machine hung. My guess is that the presence of the reader is confusing Grub about where to find the kernel. The weird thing is that Grub is on the same drive as the kernel I want it to boot so clearly the drive is still readable even when the flash card reader is plugged in. Is there some way I can tell Grub to never go looking on the flash card reader?
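
    Not a fix, but it may help to confirm which BIOS drive GRUB legacy (what Hardy ships) believes it boots from, and whether the reader shifts that mapping. From the running system, for example:

        cat /boot/grub/device.map                       # how GRUB's tools map BIOS drives to Linux devices
        grep -A3 "^title" /boot/grub/menu.lst | head    # which (hdX,Y) and root device the default entry uses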

    Read the article

  • EFS Remote Encryption

    - by Apoulet
    We have been trying to set up EFS across our domain. Unfortunately, reading/writing files over a network share does not work; we get an "Access Denied" error. Another worrying fact is that I managed to get it working for one machine, but no other would work. The machines are all Windows 2008 R2, running as VMs under an ESXi host. According to http://technet.microsoft.com/en-us/library/bb457116.aspx#EHAA we set up the involved machines to be trusted for delegation. The users are not restricted and can be trusted for delegation. The users have logged in on both sides and can read/write encrypted files locally without issues. I enabled Kerberos logging in the registry and these are the relevant logs I get on the machine that holds the encrypted files, for every certificate that the user possesses (only the Key Name changes):

        Event ID 5058: Audit Success, "Other System Events"
            Key file operation.
            Subject:
                Security ID: {MyDOMAIN}\{MyID}
                Account Name: {MyID}
                Account Domain: {MyDOMAIN}
                Logon ID: 0xbXXXXXXX
            Cryptographic Parameters:
                Provider Name: Microsoft Software Key Storage Provider
                Algorithm Name: Not Available.
                Key Name: {CE885431-9B4F-47C2-8415-2D766B999999}
                Key Type: User key.
            Key File Operation Information:
                File Path: C:\Users\{MyID}\AppData\Roaming\Microsoft\Crypto\RSA\S-1-5-21-4585646465656-260371901-2912106767-1207\66099999999991e891f187e791277da03d_dfe9ecd8-31c4-4b0f-9b57-6fd3cab90760
                Operation: Read persisted key from file.
                Return Code: 0x0

        Event ID 5061: Audit Failure, "System Integrity"
            Cryptographic operation.
            Subject:
                Security ID: {MyDOMAIN}\{MyID}
                Account Name: {MyID}
                Account Domain: {MyDOMAIN}
                Logon ID: 0xbXXXXXXX
            Cryptographic Parameters:
                Provider Name: Microsoft Software Key Storage Provider
                Algorithm Name: RSA
                Key Name: {CE885431-9B4F-47C2-8415-2D766B999999}
                Key Type: User key.
            Cryptographic Operation:
                Operation: Open Key.
                Return Code: 0x8009000b

    Could this be related to this error from the CryptAcquireContext function?

        NTE_BAD_KEY_STATE 0x8009000BL
        The user password has changed since the private keys were encrypted.

    The problem is that the users I am using at the moment cannot change their passwords.
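
    One quick sanity check on the machine holding the files: cipher can list which certificates/users are able to decrypt a given file, which confirms whether the remote user's key is even known there (the path below is only an example):

        cipher /c "D:\Shares\Encrypted\testfile.docx"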

    Read the article

  • How to configure a trusted connection between IIS 7 and SQL Server 2005?

    - by user1180652
    How do I configure a trusted connection between IIS 7 and SQL Server 2005? My webapp was working fine with Windows Authentication enabled in IIS. Now, in order to solve a problem, we need to use a trusted connection. Unfortunately, enabling the trusted connection in the web.config broke the webapp. Oddly enough, when I run this application with a trusted connection from my local dev machine (using the Cassini web server) it works fine. IIS (Windows Server 2008) is running on one machine. The database (SQL Server 2005, but it could migrate to 2008) is running on another machine. We are on a Windows domain running AD. All traffic is within our own firewall - no public access. Beyond that, I can't provide much info, but I can find it. We're very "compartmentalized" (we have server people, security people, Oracle people, SQL Server people, etc.). Thanks! Update 02/14/2012 0902: The webapp is now functional (no longer broken) but the main issue is still unresolved. Now I have the app's application pool running as a domain account with permissions on the SQL Server box and the IIS box. We were using this account to run the application but, and here's the problem, we need to log the real user name that made a change. When using the service account, the name of that service account appeared in the audit tables, making the auditing quite useless. So, now I'm at least running again. The connection string in the web.config is using "Trusted_Connection=True", the app pool is using a domain account with access to both boxes, BUT when I make a change (logged in as me) the name of the service account (the app pool identity) is still logged in the audit tables. I also manually granted full permissions to the service account on the webapp folder. What do I need to do in order to log my name, not the service account, in the audit tables? Everything I'm reading says I need to establish a trusted connection between the two servers.
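
    For what it's worth, getting the interactive user's name all the way to the database usually means combining Windows authentication, ASP.NET impersonation and Kerberos delegation (the classic double-hop problem). A minimal web.config sketch of the first two pieces, with placeholder names, might look like this; note that under IIS 7's integrated pipeline, impersonation may additionally require classic mode or the validateIntegratedModeConfiguration switch, and delegation still has to be allowed in AD:

        <system.web>
          <authentication mode="Windows" />
          <identity impersonate="true" />
        </system.web>
        <connectionStrings>
          <add name="AppDb" providerName="System.Data.SqlClient"
               connectionString="Data Source=SQLBOX;Initial Catalog=AppDb;Integrated Security=SSPI;" />
        </connectionStrings>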

    Read the article

  • How can I trigger the creation of a new CLB file?

    - by Xperimental
    I'm currently having a problem with an application using COM running on Windows Vista. The application runs OK on one machine, but doesn't work on a similarly configured machine. Both machines are virtual images originating from the same source image. While searching the registry for causes of this error, I came across the CLBVersion key in HKCR\CLSID, which seems to have something to do with COM. The value of the key differs between the two machines (0x6 on the erroneous one, 0xc on the working one). Also, there are files containing the same number in their filenames in the %SystemRoot%\Registration directories of the machines. They are called R000000000006.clb and R00000000000c.clb respectively. I have already searched the Windows event log for anything leading to the creation of those files (I searched by the creation date of the files). Now a few questions regarding the registry keys and the files: Is it correct that this is connected to COM? What is the function of the files? What causes the creation of a new "CLBVersion"? Is there a way for me to trigger the creation of a new CLB file? edit: I have now found out that this has nothing to do with my application error. But I would still be interested in details about the registry key and the files. An installation of Visual Studio 2005 has brought the second machine to the same configuration (0xc in registry and file) as the other one.

    Read the article

  • Errors with Using Webcam

    - by C.G.
    I have been having some issues accessing a webcam from my machine. Sometimes (not always) when I run a program that accesses the device (Cheese, guvcview, and code using OpenCV), I get either of two messages, which lead to the program crashing. The first occurs after running the webcam for some time:

        libv4l2: error dequeuing buf: No such device
        VIDIOC_DQBUF: No such device

    The other will occur without even letting me have a chance to run the webcam:

        libv4l2: error turning on stream: No space left on device
        VIDIOC_STREAMON - Unable to start capture: No space left on device

    Occasionally after getting these errors I will also receive a message saying that no such device can be found for subsequent runs. Other than the times the "no device found" message appears, the webcam shows up when I use lsusb. My machine runs Fedora 16, and the webcam is a Logitech C920. I do have ffmpeg installed, and I have been able to run the camera many times in the past without errors. What is particularly puzzling is that these errors just sprang up this past weekend. No new software or hardware has been installed on this machine recently; I haven't changed any settings recently either. It could possibly be a driver issue, but I don't know what could have changed to cause it. My attempts at researching this problem have been fruitless, as the issue seems to most commonly occur with multiple webcams, and I am only working with one device. I'd appreciate any advice, as this has become a bit frustrating.
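
    For what it's worth, "No space left on device" from a UVC camera usually refers to USB bandwidth rather than disk space. A few hedged checks (the device node may differ; v4l2-ctl comes from the v4l-utils package):

        v4l2-ctl --list-devices                              # confirm which /dev/video* is the C920
        v4l2-ctl -d /dev/video0 --list-formats-ext | less    # formats/resolutions the camera offers
        lsusb -t                                             # which controller/hub the camera shares, and at what speed
        dmesg | tail -n 30                                   # kernel messages right after a failure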

    Read the article

  • Windows 7 remote desktop encryption error every few minutes

    - by rfrankel
    Because of an error in data encryption, this session will now end. This is the error I've been getting more and more frequently over the past few days, to the point that I can't ignore it because it's happening consistently within 5 minutes of connecting - sometimes within a few seconds. Both the remote and local machines are Windows 7 Pro x64. The remote machine is behind a Linksys RV082, and I'm using UPnP to forward a remote port to the correct local port. This setup had been working fine for several months, and I can't think of any recent relevant changes that might have been made. Things I've already tried: Disabling unnecessary components of the network connection on the remote machine, until only IPv4 and Client for Microsoft Networks remain. Disabling TCP large send offload on both the remote and local machines. Confirming that the remote machine is not mentioned anywhere in any DMZ settings on the Linksys router. Confirming that there are no x509-related registry keys screwing things up (this is the suggested fix for a slightly different error anyway). These are the only solutions I've been able to find after about an hour of searching, and most of them apply to XP or Server 2003 in any case. If anyone could suggest something else, it would be much appreciated.

    Read the article

  • Splitting build across the network?

    - by Dandikas
    Is there a known solution for splitting the build process across networked machines? Use case: we are an average software development company. We own around 50 development workstations (Quad Core 2.66GHz, 4 GB RAM, 200 GB RAID). Needless to say, at any single moment not every machine is loaded to the max. There are 5 to 15 projects running simultaneously at any single moment. Obviously all of them are continuously built on a server, then deployed to the proper environment. A single project build takes from 3 to 15 minutes. The problem: whenever we build 5 projects in a row, the last project is ready after around 25-50 minutes. Building in parallel does not solve the problem (the build is only part of the game; then you need to deploy, run tests, etc.). YES, the correct solution is to add another build server, but "that involves buying new expensive hardware, and we already spent a lot!". Yeah, right (damn them)! Anyway, what about splitting the build among developer workstations? Let's say whenever we need to build project "A", we check 5 workstations and start the build on all that are not overloaded. The build can be cancelled by a developer if he really needs all the power of his machine, as long as there is at least one machine still building. After the build is finished, deployment can be performed to the proper environment (hosted on some server, not on a workstation :) ). The bigger the company, the more this makes sense to me. Has anyone tried something like this? Are there any good practices? Any helpful software?
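
    If the projects are C/C++, one well-known way to borrow idle workstation cycles for the compile step is distcc; a sketch under stated assumptions (host names are hypothetical, every machine needs the same toolchain, and distccd must be running on the helpers). Whether something similar exists for your particular build chain is a separate question:

        export DISTCC_HOSTS='localhost ws01/4 ws02/4 ws03/4'   # /4 caps jobs sent to each helper
        make -j12 CC="distcc gcc" CXX="distcc g++"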

    Read the article

  • CentOS 6.3 KVM external IP forwarding to guests

    - by user1111702
    I have a CentOS 6.3 server with KVM installed. The server has 4 external IPs and one NIC:

        176.9.xxx.xx1
        176.9.xxx.xx2
        176.9.xxx.xx3
        176.9.xxx.xx4

    I use the following configuration, with ifcfg-eth0 as slave to ifcfg-br0. The configuration in ifcfg-eth0 is:

        DEVICE=eth0
        ONBOOT=yes
        BRIDGE=br0
        HWADDR=14:da:e9:b3:8b:99

    and in ifcfg-br0:

        DEVICE=br0
        TYPE=Bridge
        BOOTPROTO=static
        BROADCAST=176.9.xxx.xxx
        IPADDR=176.9.xxx.xx1
        NETMASK=255.255.255.0
        SCOPE="peer 176.9.xxx.xxx"

    and I have 3 more aliases for br0. br0:1 to get the traffic from the second external IP:

        DEVICE=br0:1
        IPADDR=176.9.xxx.xx2
        NETMASK=255.255.255.248
        ONBOOT=yes

    br0:2 to get the traffic from the third external IP:

        DEVICE=br0:1
        IPADDR=176.9.xxx.xx3
        NETMASK=255.255.255.248
        ONBOOT=yes

    br0:3 to get the traffic from the fourth external IP:

        DEVICE=br0:1
        IPADDR=176.9.xxx.xx4
        NETMASK=255.255.255.248
        ONBOOT=yes

    The above settings work fine and I receive the traffic from all the external IPs. My problem is that I want to pass the traffic from each external IP to a specific virtual guest on my server, i.e. traffic that comes from 176.9.xxx.xx2 must pass to virtual machine 1, 176.9.xxx.xx3 must pass to virtual machine 2, and 176.9.xxx.xx4 must pass to virtual machine 3. Can you please help me achieve this? What are the settings on the host, and what should I do on the guests? Thank you in advance.
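
    Two common patterns for this: either attach each guest's virtual NIC to br0 and configure its public address inside the guest, or keep the guests on a private libvirt network and DNAT one public IP to each guest on the host. A rough sketch of the latter (the guest addresses are hypothetical, and packet forwarding must be enabled):

        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A PREROUTING -d 176.9.xxx.xx2 -j DNAT --to-destination 192.168.122.11
        iptables -t nat -A PREROUTING -d 176.9.xxx.xx3 -j DNAT --to-destination 192.168.122.12
        iptables -t nat -A PREROUTING -d 176.9.xxx.xx4 -j DNAT --to-destination 192.168.122.13
        iptables -A FORWARD -d 192.168.122.0/24 -j ACCEPT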

    Read the article

  • Backup plan for linux webserver in small business?

    - by radman
    Hi, I am currently in the process of writing a backup plan for the webserver in use by my business. I am very new to this area and have a few ideas about how things should work, but am unsure what tools to use and what sort of restore process is appropriate. I'm looking for something relatively simple; it doesn't have to be 100% paranoid, just enough to give me a reliable backup. Speed is not of the essence and there is not going to be a live fallback in place. The backup will be onto a single HDD that will be stored onsite (no option for offsite as yet). Backups will be taking place weekly. I am constrained by both time and money, which is why I'm aiming for a good-enough solution. Is taking an image of the webserver system drive periodically and using that as the backup appropriate? Should I be testing that the backups restore correctly every time I perform one? This is a bit broad, but what setup would you use if you were in my place, given the services I am running? Should I add additional machines and split the services? Any advice is much appreciated! Server details:

        Platform: Ubuntu Linux server
        Running: mail server, svn server, MediaWiki, WordPress, Apache webserver
        Hardware: single 500 GB SATA drive
        Architecture: single machine behind a router (with firewall), accessible from the internet
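
    Not a full answer, but a weekly job along these lines (paths, mount point and repository location are placeholders) captures config, content and repository data in a restorable form, and is easy to test by restoring onto a spare machine or VM:

        #!/bin/sh
        DEST=/mnt/backup/$(date +%F)           # external backup disk, one dated folder per run
        mkdir -p "$DEST"
        rsync -a /etc /var/www /home "$DEST/"
        mysqldump --all-databases > "$DEST/mysql-all.sql"      # if MediaWiki/WordPress sit on MySQL
        svnadmin hotcopy /srv/svn/repo "$DEST/svn-repo"        # consistent copy of the svn repository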

    Read the article

  • Avoid "privacy pitfalls" in Windows and Linux?

    - by Somebody still uses you MS-DOS
    I have a Windows and a Linux machine. In Windows, every time I visit a site, a lot of cache/history files are created on my machine. I set up Firefox not to save anything... but Windows saves a lot of "temp" files, and I've found strange entries in the registry for files I opened (like video names). Each video I open in VLC is shown in "Last shown videos". In Windows, all opened files can be found under "Recent opened files" as well. A lot of these privacy settings can be tweaked (VLC and "Recent opened files" in Windows) - it's a PITA doing it individually, but it's possible - but there isn't a guide to these "internal" privacy traces that are left on a Windows installation. On Linux, I only know about these problems at the app level (like VLC). My question is: is there a complete guide to avoiding undesirable traces of what I did/watched/used on my Windows machine (deleting them every time the PC is restarted, or even avoiding recording this info at all)? Is there a website with configuration guides for different types of software? I would like to know about Linux privacy pitfalls as well.

    Read the article

  • Rename Active Directory domain following Windows 2000 -> 2008 migration.

    - by ewwhite
    I'm working with a site that needs an internal DNS domain rename. It currently has a DNS name of domain.abc.com and an NT name of ABC. I'm trying to get to a DNS name of abctrading.com and an NT name of ABCTRADING. Split DNS would be used. The site originally ran from a single Windows 2000 domain controller hosting AD, file, print, DHCP and DNS services. There was no Exchange system in the environment. The 50 client PCs are all Windows XP, with a handful of users using roaming profiles. All users are in a single OU and there are no group policies/GPOs. I'm a Linux engineer, but have been trying to guide another group of consultants to reach a more suitable setup. With the help of this group, we were able to move the single Windows 2000 system to a set of Windows 2008 R2 servers separated into domain controller and file/print systems (virtualized). We are also trying to add an Exchange 2010 system to this mix. The Windows 2000 server was demoted and is no longer in the picture. This is the tricky part, as the client wants the domain renamed and the consultants aren't quite sure how to get through it without another 32-40 hours of testing/implementation. They say that there's considerable risk in doing the rename without a completely isolated test environment. However, this rename has to be done before installing Exchange. So we're stuck at this point. I'd like to know what's involved in renaming the domain at this point. We're on Windows Server 2008. The AD is healthy now. Coming from a Linux background, it seems as though there should be a reasonable path to this. Also, since the original domain appears to be a child/subdomain, would that be a problem here? I'd appreciate any guidance.
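
    For what it's worth, the supported procedure is the domain rename toolset (rendom and gpfixup) that Microsoft documents for Server 2008/2008 R2. Very roughly the sequence looks like the following, but it genuinely wants a lab pass and verified backups first, which is probably what the consultants are pricing in:

        rendom /list        (writes the current forest description to Domainlist.xml)
        (edit Domainlist.xml: domain.abc.com -> abctrading.com, NetBIOS ABC -> ABCTRADING)
        rendom /upload
        rendom /prepare
        rendom /execute     (DCs reboot; member machines then need two reboots each)
        gpfixup /olddns:domain.abc.com /newdns:abctrading.com /oldnb:ABC /newnb:ABCTRADING
        rendom /clean
        rendom /end         (unfreezes the forest configuration)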

    Read the article

  • How important is dual-gigabit lan for a super user's home NAS?

    - by Andrew
    Long story short: I'm building my own home server based on Ubuntu with 4 drives in RAID 10. Its primary purpose will be NAS and backup. Would I be making a terrible mistake by building a NAS Server with a single Gigabit NIC? Long story long: I know the absolute max I can get out of a single Gigabit port is 125MB/s, and I want this NAS to be able to handle up to 6 computers accessing files simultaneously, with up to two of them streaming video. With Ubuntu NIC-bonding and the performance of RAID 10, I can theoretically double my throughput and achieve 250MB/s (ok, not really, but it would be faster). The drives have an average read throughput of 83.87MB/s according to Tom's Hardware. The unit itself will be based on the Chenbro ES34069-BK-180 case. With my current hardware choices, it'll have this motherboard with a Core i3 CPU and 8GB of RAM. Overkill, I know, but this server will be doing other things as well (like transcoding video). Unfortunately, the only Mini-ITX boards I can find with dual-gigabit and 6 SATA ports are Intel Atom-based, and I need more processing power than an Atom has to offer. I would love to find a board with 6 SATA ports and two Gigabit LAN ports that supports a Core i3 CPU. So far, my search has come up empty. Thus, my dilemma. Should I hold out for such a board, go with an Atom-based solution, or stick with my current single-gigabit configuration? I know there are consumer NAS units with just one gigabit interface (probably most of them), but I think I will demand a lot more from my server than the average home user. Any advice is appreciated. Thanks.
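
    As a rough back-of-the-envelope check on the single-NIC case: the wire tops out at 125 MB/s, realistically more like 110-115 MB/s after protocol overhead. Two video streams at, say, 5-10 MB/s each still leave roughly 90-105 MB/s, or about 22-26 MB/s apiece for the other four clients if they all read at once, which is already in the neighbourhood of what a single consumer drive sustains.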

    Read the article

  • Windows 7 - Intermittently processes will not close when the app closes

    - by Bill Sambrone
    I have a user I am supporting who has the strangest issue. There are two problem applications, Word 2010 and a scanning program called ScandAll PRO. Intermittently (and at least once a day), she will close an app and the underlying process will not close. Both Word 2010 and the scanning software have all the latest updates. There is another user with the same software and identical hardware who does not have this problem. I have formatted and rebuilt the computer for the user who is having the problems. After the rebuild, the machine was fine for a day, but the scanning software continues to intermittently keep its process running even after it is closed. This is a problem because she cannot open a new instance of it while the process is still running. There is a boatload of line-of-business software on this machine, all of which she needs. I believe the Word 2010 issue is due to a misbehaving add-in (there are two add-ins, neither of which seems stable), and I think my best bet is to work with the add-in vendor on it. The scanning program staying open is isolated to this one user. The only difference between her machine and the other user's is that she has QuickBooks, RoboForm, and Adobe Acrobat X Pro. Any ideas of what can be causing this, or other diagnostic steps to try?
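
    Not a root-cause fix, but as a stopgap the orphaned process can be found and cleared without logging her off; the scanner's image name below is a guess, so substitute whatever tasklist actually shows:

        tasklist | findstr /i "winword scandall"
        taskkill /f /im WINWORD.EXE
        taskkill /f /im ScandAllPRO.exe        (replace with the real image name from tasklist)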

    Read the article

  • ProCurve ACL to prevent a subnet from leaving the switch

    - by kce
    I have a single HP ProCurve 2610 in a remote location that is connected to the rest of the network via SHDSL. There are two layer-3 networks on this segment. ACLs are set up to deny one subnet (192.0.2.0/24) from ever being able to leave the switch, by virtue of being applied to the port attached to the upstream connection. The other subnet should be permitted to freely leave the switch. Both subnets are on the same VLAN. Unfortunately, sFlow very clearly shows broadcast traffic from 192.0.2.0/24 on the upstream connection. ProCurve ACLs are not my strong suit, but I feel like I'm missing something very simple here.

        ip access-list extended "Filter for Camera Network"
            deny ip 192.0.2.0 0.0.0.255 0.0.0.0 255.255.255.255 log
            permit ip 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255
            exit

        interface 24
            name "DSL - UPLINK"
            access-group "Filter for Camera Network" in
            exit

    Unless I am mistaken, traffic from 192.0.2.0/24 should be dropped as it crosses the uplink port (int 24), whereas all other traffic will be permitted by the subsequent default allow rule. What exactly am I missing here?

    EDIT: "Firstly, why do you have two subnets contained in the same VLAN?" Because that's how it was configured by a previous administrator, and while it makes conceptual sense that a single subnet is "mapped" to a single VLAN, there's no technical constraint I am aware of that makes this have to be the case. "Instead of filtering inbound traffic on your uplink, you should be filtering outbound traffic." The HP 2600 series can only filter inbound traffic on interfaces. Should I change my filter to deny any to 192.0.2.0/24?

    Read the article

  • Printing on Remote Desktop session

    - by Arindam Banerjee
    We have to connect to a Windows 2008 server using Remote Desktop from a Windows XP machine. A barcode printer is attached to the XP machine, and the printer is shared as a local resource in the RDC session to the server. On the server we have to print from an application which prints either to an LPT port or to a shared printer (UNC path). For this I configure printer pooling, combining LPT1 and the (Terminal Server) TSxxx port, as I don't know of a way to access the terminal session printer via a UNC path. But I have the following issues. Every time I connect to a remote session, the printer from my local Win XP machine shows up in Printers and Faxes on the Win 2008 Server (Terminal Server), but I am not allowed to manage the Win XP printer from the Terminal Server to enable pooling. On the server I have to change the security permissions every time and then enable printer pooling. How can I keep the security permissions unchanged? Secondly, I created a batch file to enable printer pooling:

        rundll32 printui.dll,PrintUIEntry /Xs /n "Printer (from CLIENT)" Portname "LPT1:,TS005"

    But every time, the printer in the terminal session connects on a different terminal session port. Any solution to make the TS port fixed? Help from anyone will be highly appreciated.

    Read the article
