Search Results

Search found 20946 results on 838 pages for 'at command'.


  • Xorg configuration file on Debian Testing

    - by nubicurio
    I cannot find the Xorg configuration file on my newly installed Debian on my tablet PC, so I followed this tutorial http://wiki.debian.org/Xorg and ran the command "Xorg -configure", which produced the following error messages:

        (EE) Failed to load module "vmwgfx" (module does not exist, 0)
        (EE) vmware: Please ignore the above warnings about not being able to load module/driver vmwgfx
        (++) Using config file: "/root/xorg.conf.new"
        (==) Using system config directory "/usr/share/X11/xorg.conf.d"
        FATAL: Module fbcon not found.
        Number of created screens does not match number of detected devices. Configuration failed.

    Does anyone know what this means and how I should proceed? Why is there a warning about vmware, and what is this fbcon module?
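    A quick diagnostic for the fbcon part (a sketch; on many Debian kernels the framebuffer console is built into the kernel rather than shipped as a module, in which case the "FATAL: Module fbcon not found" message is harmless):

        # Is fbcon available as a loadable module for the running kernel?
        find /lib/modules/$(uname -r) -name 'fbcon*'
        # If nothing is found, check whether it was compiled in instead
        grep -i CONFIG_FRAMEBUFFER_CONSOLE /boot/config-$(uname -r)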

    Read the article

  • SFTP over double server hop

    - by josh.trow
    I'm trying to work out a method to access files on an SFTP server that I cannot reach from my local machine. Currently, I have to SSH to a remote server (it is in a certain IP block that the final SFTP server will accept connections from), then from there SFTP to the destination server. From there, I get the files I am interested in, thereby dropping them onto the middleman server, from which I can fetch them either over a Samba share or with a direct scp. I also work in reverse: I drop the files onto the middleman, SSH to it, then SFTP to the destination and put them into the appropriate folders. My goal is to shorten this. The unfortunate restrictions are that my machine is Windows (I use KiTTY and/or Cygwin) and I cannot modify the middleman server (or destination server) in any way. I am willing to use command-line or GUI programs as long as they work and are free. Any ideas?
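    One way to cut out the manual middleman step (a sketch, assuming a reasonably recent OpenSSH under Cygwin and that the middleman permits TCP forwarding; host and path names are placeholders):

        # OpenSSH 7.3+: jump through the middleman in a single command
        sftp -o ProxyJump=user@middleman user@destination:/path/to/file .
        # Older OpenSSH: same idea via a ProxyCommand
        sftp -o ProxyCommand="ssh -W %h:%p user@middleman" user@destination:/path/to/file .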

    Read the article

  • Enlarge partition on SD card

    - by chenwj
    I have followed "Cloning an SD card onto a larger SD card" to clone a 2G SD card to a 32G SD card; the file system is ext4. However, on the 32G SD card I can only see 2G of space available. Is there a way to use the rest? Here is the output of fdisk:

        Command (m for help): p
        Disk /dev/sdb: 32.0 GB, 32026656768 bytes
        64 heads, 32 sectors/track, 30543 cylinders, total 62552064 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000e015a
        Device Boot      Start       End   Blocks  Id System
        /dev/sdb1   *       32    147455    73712   c W95 FAT32 (LBA)
        /dev/sdb2       147456   3994623  1923584  83 Linux

    I want to make /dev/sdb2 use up the remaining space. I tried resize2fs /dev/sdb after dd, but got the message below:

        $ sudo resize2fs /dev/sdb
        resize2fs 1.42 (29-Nov-2011)
        resize2fs: Bad magic number in super-block while trying to open /dev/sdb
        Couldn't find valid filesystem superblock.

    Any idea what I am doing wrong? Thanks.
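    resize2fs operates on a partition (/dev/sdb2), not the whole disk (/dev/sdb), and the partition itself has to be enlarged first. A sketch, assuming /dev/sdb2 is the last partition and keeping its start sector of 147456 (verify against your own fdisk output before deleting anything):

        # 1. In fdisk: delete partition 2, recreate it with the same start and the default (maximum) end
        sudo fdisk /dev/sdb        # commands: d 2, then n p 2 147456 <default end>, then w
        sudo partprobe /dev/sdb    # or unplug/replug the card so the kernel rereads the table
        # 2. Check the filesystem, then grow it to fill the new partition
        sudo e2fsck -f /dev/sdb2
        sudo resize2fs /dev/sdb2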

    Read the article

  • How to Downgrade Razor 3 and Fix CSHTML Not Working in VS 2010/2012

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/11/04/how-to-downgrade-razor-3-and-fix-the-issue-that.aspx
    A few days ago I migrated a project to MVC 4 and suddenly saw that the project's .cshtml files no longer worked. The problem happened because my project was now based on Razor 3 RC, which VS 2012 doesn't yet support (remember the VS team will ship support in VS 2012 Update 4). My migration had updated it to Razor 3 (this is not tied to MVC 4; MVC 4 uses the older Razor 2). So how do we fix the problem? Since VS 2012 Update 4 is still in development and Razor 2 support exists in both older versions of VS (2010 and 2012), it is better to migrate our Razor back to the old version so we can use the project in VS 2010 or 2012. If your project is on Razor 3 and syntax highlighting doesn't work for you, I suggest you try this NuGet package: https://www.nuget.org/packages/UpgradeMvc3ToMvc4. Note that this will not succeed right away: first delete the packages folder in your project, then open packages.config and remove all the package entries. Now run this command:

        PM> Install-Package UpgradeMvc3ToMvc4

    If it fails, see which package causes the error in the console, simply remove that reference and try again. Now run the project and it should work. Afterwards you will see that the WebGrease DLL has a version-number issue; simply update it to version 1.5.2 and your project is ready to run on .NET 4. If you do bin deployment, you don't need MVC 4 installed on the server either. Remember that MVC 5 is based on .NET 4.5, which simply means you can't run it in VS 2010; until VS 2012 Update 4, MVC 5 .cshtml pages will behave like plain HTML pages (no syntax highlighting or IntelliSense). Thanks for reading my post.

    Read the article

  • Configuring memcached for a particular scenario

    - by pradeepchhetri
    I have a web application that queries an OpenTSDB server (which in the backend uses an HBase cluster) for the datapoints of different metrics, and plots those metrics with the dygraphs JavaScript graphing library. Since fetching all datapoints from the past day from OpenTSDB for a single metric itself takes nearly 2 seconds, my application, which plots nearly 25 metrics, is becoming very slow. To reduce this latency, I am thinking of using the memcached module of PHP 5 to cache all the queries. But I have a few questions regarding memcached. 1. Is there any way to configure memcached to keep updating its cache in the background by running some command-line queries after a particular interval of time? 2. Is there any way to configure memcached to always answer a query from the cache instead of first updating the cache? My application just plots datapoints for the past day, so missing some datapoints is not that critical.
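    On question 1: memcached itself is a passive store and cannot refresh keys on its own, so background updating is usually done by a cron job that re-runs the queries and rewrites the cache entries (a sketch; the script name and interval are placeholders):

        # /etc/cron.d/warm-tsdb-cache -- re-populate memcached every 5 minutes
        */5 * * * * www-data /usr/bin/php /var/www/warm_cache.php > /dev/null 2>&1

    With the cache kept warm this way, question 2 largely solves itself: the page only ever reads whatever the warmer last stored.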

    Read the article

  • "Unmet Dependencies" problem when trying apt-get install

    - by GChorn
    Anytime I try to install Python packages using the command:

        sudo apt-get install python-package

    I get the following output:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies:
         linux-headers-generic : Depends: linux-headers-3.2.0-36-generic but it is not going to be installed
         linux-headers-generic-pae : Depends: linux-headers-3.2.0-36-generic-pae but it is not going to be installed
         linux-image-generic : Depends: linux-image-3.2.0-36-generic but it is not going to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    This seems to have started when these same three packages showed up in Ubuntu's Update Manager and threw an error when I tried to install them there. Based on the suggestion in the output above, I tried running:

        sudo apt-get -f install

    But this only gave me several instances of the following error:

        dpkg: error processing /var/cache/apt/archives/linux-image-3.2.0-36-generic_3.2.0-36.57_i386.deb (--unpack):
         unable to create `/lib/modules/3.2.0-36-generic/kernel/drivers/net/wireless/ath/carl9170/carl9170.ko.dpkg-new' (while processing `./lib/modules/3.2.0-36-generic/kernel/drivers/net/wireless/ath/carl9170/carl9170.ko'): No space left on device

    Now maybe I'm way off-base here, but I'm wondering if the error could be coming from the "No space left on device" part? The thing is, I'm running Ubuntu as a VirtualBox VM, but I've got it set to dynamically increase its virtual hard drive space as needed, so why am I still getting this error? Here's my output when I use df -h:

        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda1             6.9G  5.7G  869M  88% /
        udev                  494M  4.0K  494M   1% /dev
        tmpfs                 201M  784K  200M   1% /run
        none                  5.0M     0  5.0M   0% /run/lock
        none                  501M   76K  501M   1% /run/shm
        VB_Shared_Folder      466G  271G  195G  59% /media/sf_VB_Shared_Folder

    When I perform sudo apt-get -f install and the system says "After this operation, 192 MB of additional disk space will be used", does that mean 192 MB within my virtual machine's current space, or 192 MB on top of my remaining free space? As I said, my machine normally dynamically allocates additional drive space from the host machine, so I don't see why there would be space restrictions at all...
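    The "No space left on device" line is the real problem: / is 88% full, and kernel packages unpack into /lib/modules on /. A sketch for reclaiming space before retrying (package names are examples; list your installed kernels first and keep the one you are running):

        sudo apt-get clean                         # empty /var/cache/apt/archives
        dpkg -l 'linux-image-*' | grep ^ii         # see which kernels are installed
        sudo apt-get purge linux-image-3.2.0-32-generic   # example: remove an old, unused kernel
        sudo apt-get -f install

    Note also that a dynamically allocated VirtualBox disk only grows up to the virtual disk's fixed maximum size, and the partition and filesystem inside it never grow on their own; hence the 6.9G root filesystem.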

    Read the article

  • How to map Ctrl + ',' to the greater-than key ('>') or Ctrl + '.' to the less-than key ('<') using xmodmap?

    - by Maxrunner
    So I'm trying to create a combination of keys to generate the ISO key of the Portuguese layout. The key in question is '<': pressing it normally generates the '<' character, and pressing it with Shift generates the '>' character. I'm trying to create such a combination using xmodmap, and I want this to work in all programs. I've been searching on Google and came up with this example for Control + P = Up:

        xmodmap -e "keycode 33 = p P Up"

    keycode 33 matches the p key, so where does Control come in in that command? Regards,
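    Control does not appear in that command because the keysym positions in an xmodmap line mean plain, Shift, Mode_switch, and Shift+Mode_switch; the cited Ctrl+p = Up effect only works if something else maps a key to Mode_switch (some guides reassign a Control key for this). A sketch of the Mode_switch approach (keycodes are assumptions; check yours with xev):

        # Turn right Alt (often keycode 108) into Mode_switch
        xmodmap -e "keycode 108 = Mode_switch"
        # comma key (often keycode 59): plain, Shift, Mode_switch, Shift+Mode_switch
        xmodmap -e "keycode 59 = comma semicolon greater less"

    After this, AltGr + ',' produces '>' in all X programs.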

    Read the article

  • how to include screen's session name in hardstatus?

    - by fungusakafungus
    I use different screen sessions for different projects, starting screen like screen -S project1. Now I'd like to show 'project1' in the hardstatus line. The session name can be obtained from the environment variable STY: STY=13539.project1. But how do I get this into screen? I've tried the backtick command and %` in hardstatus, but I can't seem to get it right. What I did in .screenrc:

        hardstatus string '%H:%`'
        backtick 0 30 30 echo $STY

    No luck; empty %`. Then:

        backtick 0 30 30 sessionname

    Still no luck: sessionname: Not found
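    Two things go wrong above: screen does not run backtick commands through a shell, so $STY is passed literally instead of being expanded, and sessionname is a screen command rather than an executable, hence "Not found". Wrapping the echo in sh -c addresses both (a sketch; %0` refers to backtick id 0):

        backtick 0 30 30 sh -c 'echo "${STY#*.}"'
        hardstatus string '%H:%0`'

    The ${STY#*.} strips the leading "13539." so only "project1" is shown.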

    Read the article

  • Why is file sharing over internet still working, despite all firewall exceptions for filesharing being disabled?

    - by Triynko
    Every exception in my Windows Server firewall that starts with "File and Printer Sharing" is disabled (ordered by name, so that includes the domain, public (active), and private profiles). The Network and Sharing Center's options for everything except password-protected sharing are off. Why would I still be able to access a network share on that server via an address like "\\my.server.com\" over the internet? The firewall is on for all profiles and blocks incoming connections by default. A netstat -an command on the server reveals the share connection is occurring over port 445 (SMB). I restarted the client to ensure it was actually re-establishing a new connection. Is the "Password protected sharing: On" option in the Network and Sharing Center bypassing the firewall restrictions, or adding some other exception somewhere that I'm missing? EDIT: "Custom" rules are not the problem; it's the built-in rules for Terminal Services that were the problem. Can you believe port 445 (the file-sharing port) has to be wide open to the internet to use Terminal Services Licensing?
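    One way to audit which enabled inbound rules actually open port 445 (a sketch, assuming the newer netsh advfirewall interface of Server 2008 and later; Server 2003 would need the older netsh firewall syntax):

        netsh advfirewall firewall show rule name=all dir=in verbose > rules.txt
        findstr /i /c:"Rule Name" /c:"Enabled" /c:"LocalPort" rules.txt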

    Read the article

  • Monitor Windows Terminal Sessions from Linux/Mac

    - by mhd
    I'm writing some scripts to make remote connections to a Windows 2003 server a bit more user-friendly, and in doing this I want to see who's logged in already. In Windows, I could use qwinsta.exe to do this, even for remote servers. So it is exposed somehow, but I couldn't find a matching command line tool for Unix. Lacking such a tool, I could install an ssh server on the machine and call it remotely, parsing the output or write a small service of my own that would expose this via http, if I don't want full-blown ssh access. Do I have to do this, or is there already a tool for querying terminal services remotely?
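    Lacking a native Unix counterpart to qwinsta.exe, the ssh-wrapper idea can at least be reduced to a one-liner (a sketch; assumes an SSH server such as the Cygwin OpenSSH port is already running on the Windows box):

        # List terminal-services sessions on the Windows 2003 server from Linux/Mac
        ssh administrator@winserver qwinsta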

    Read the article

  • Screen flickering / scrambling on an Asus UL30A

    - by user55059
    Recently my laptop screen started to flicker. You can view the phenomenon here: YouTube. Sometimes the screen is totally scrambled, but most of the time it starts with the title bar only. It happens inconsistently. My laptop is an Asus UL30A and I'm using Ubuntu 11.10. Output from the command sudo lshw -C display; lsb_release -a; uname -a; xrandr:

        *-display:0
             description: VGA compatible controller
             product: Mobile 4 Series Chipset Integrated Graphics Controller
             vendor: Intel Corporation
             physical id: 2
             bus info: pci@0000:00:02.0
             version: 07
             width: 64 bits
             clock: 33MHz
             capabilities: msi pm vga_controller bus_master cap_list rom
             configuration: driver=i915 latency=0
             resources: irq:44 memory:fe400000-fe7fffff memory:d0000000-dfffffff ioport:dc00(size=8)
        *-display:1 UNCLAIMED
             description: Display controller
             product: Mobile 4 Series Chipset Integrated Graphics Controller
             vendor: Intel Corporation
             physical id: 2.1
             bus info: pci@0000:00:02.1
             version: 07
             width: 64 bits
             clock: 33MHz
             capabilities: pm bus_master cap_list
             configuration: latency=0
             resources: memory:fe800000-fe8fffff
        LSB Version: core-2.0-ia32:core-2.0-noarch:core-3.0-ia32:core-3.0-noarch:core-3.1-ia32:core-3.1-noarch:core-3.2-ia32:core-3.2-noarch:core-4.0-ia32:core-4.0-noarch
        Distributor ID: Ubuntu
        Description: Ubuntu 11.10
        Release: 11.10
        Codename: oneiric
        Linux steelke 3.0.0-14-generic-pae #23-Ubuntu SMP Mon Nov 21 22:07:10 UTC 2011 i686 i686 i386 GNU/Linux
        Screen 0: minimum 320 x 200, current 1366 x 768, maximum 8192 x 8192
        LVDS1 connected 1366x768+0+0 (normal left inverted right x axis y axis) 293mm x 164mm
           1366x768 60.0*+
           1360x768 59.8 60.0
           1024x768 60.0
           800x600 60.3 56.2
           640x480 59.9
        VGA1 disconnected (normal left inverted right x axis y axis)
        HDMI1 disconnected (normal left inverted right x axis y axis)
        DP1 disconnected (normal left inverted right x axis y axis)
        DP2 disconnected (normal left inverted right x axis y axis)

    I already rolled back the kernel to 3.0.0-14 instead of 3.0.0-17 as mentioned in this post, but without result. I guess the problem is related to the driver, because I don't see similar behaviour in the BIOS setup. Any tips or help is welcome.

    Read the article

  • Execute script before shutting down in Ubuntu

    - by juanefren
    When I shut down my computer I want to show some pending tasks that I have to do before leaving the office. I wrote a local application to manage those tasks, so basically I just want to run a command and shut down after I close the app. I have already tried these options:

        * /etc/gdm/PostSession/Default -- works only when I select the Log Out option, not Shutdown.
        * /etc/rc0.d/K01mycustomscript -- executes the script after X is killed.
        * $HOME/.bash_logout -- appears to do nothing.
        * ./app-to-run && sudo shutdown -h now -- don't like it for two reasons: it prompts for the sudo password, and I can't use my laptop's shutdown button.

    I am using Ubuntu 10.04.
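    The sudo-password objection to the last option can be removed with a targeted sudoers rule (a sketch; create it with visudo and substitute your own user name):

        # sudo visudo -f /etc/sudoers.d/shutdown
        yourname ALL=(root) NOPASSWD: /sbin/shutdown

    ./app-to-run && sudo shutdown -h now then runs without prompting; the laptop shutdown-button case still needs one of the session hooks above.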

    Read the article

  • How can I restore GRUB without a live CD?

    - by Looterguf
    I realize that this is a duplicate of a question asked before, but in that question the asker managed to find his live CD and no real answer appeared, so I am re-asking it. I managed to screw up my GRUB by deleting two Linux partitions on my hard drive from Windows. After this, GRUB gives the error "partition not found" and drops me to the grub rescue prompt. The only command I have found to work there is ls, which spits out my partitions. I would use the live-CD fix, but I am in India and all my live CDs are back home in the US... What I've got is an internet connection, a 4 GB flash drive with Flow OS installed (which I am currently using, but can wipe if need be), and a working laptop that I can borrow. What should I do?
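    If a Linux partition with /boot still exists, GRUB can often be talked into booting from the rescue prompt itself, with no external media (a sketch; "(hd0,msdos1)" is a placeholder found by probing with ls):

        grub rescue> ls                                # find a partition that still contains /boot/grub
        grub rescue> set prefix=(hd0,msdos1)/boot/grub
        grub rescue> set root=(hd0,msdos1)
        grub rescue> insmod normal
        grub rescue> normal

    Once booted, run sudo grub-install /dev/sda to make the fix permanent. Failing that, the 4 GB Flow OS stick can be re-imaged as an Ubuntu live USB.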

    Read the article

  • Exceptions from automongobackup, yet script completes

    - by chakram88
    I am using automongobackup to, well, automate the backups of MongoDB. Output from the script (to STDERR) has the following exceptions (but the backup completes, and the dump files are created):

        ###### WARNING ######
        STDERR written to during mongodump execution.
        The backup probably succeeded, as mongodump sometimes writes to STDERR, but you may wish to scan the error log below:
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: HostAndPort: bad port #
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: connect failed

    I know that the host and port are correct. If I run mongodump --host=127.0.0.1:27017 --journal (which is the effective command from automongobackup, based on the options set and my reading of the source code), everything runs cleanly without any error reporting, and the dump files are created as expected. Why would automongobackup report connection errors, even though it does create the dump files, when a straight call to mongodump does not?

        Debian 6.0 Lenny (from Linode image: Latest 3.2 (3.2.1-x86_64-linode23))
        AutoMongoBackup VER 0.9
        mongodb v 2.0.2

    Read the article

  • CentOS 5.6, virtual on vSphere

    - by Glasnhost
    Suddenly my virtual CentOS server (5.6 on VMware vSphere) is not working. It started with the URL not responding, nor the IP (no HTTP response, no ping). When I logged into the server via ssh to start troubleshooting, I noticed that most commands no longer work:

        top   - machine hangs (it's not slow otherwise)
        ps    - machine hangs (funny enough, the Apache server and web app are running and sending me emails)
        ls -l - in some directories, hangs after printing the first file; listing specific files shows only the first one
        more  - also hangs on some files

    So there is very little I can try. I restored my virtual machine from yesterday and the day before, and they show the same behaviour: they hang on commands (but yesterday they were working). There is no firewall on the machine, though there is one on the host. I can connect with ftp, but I can't download files or list directories apart from the user's top directory. Working hard right now; any ideas appreciated.
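    When ls -l, top and ps hang while the machine otherwise runs, the usual suspect is a stat() or /proc read blocking on bad storage (or a hung network mount). Shell globbing reads only directory entries, so it can still enumerate names (a diagnostic sketch):

        echo /var/log/*        # list names without stat()ing each file
        dmesg | tail -50       # look for I/O errors from the virtual disk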

    Read the article

  • How can old DOS utilities be incompatible with x64?

    - by Dims
    I found that the ARJ.EXE archiver does not run under Windows 7 x64. But how can that be? The archiver's task is some very basic file I/O. Also, I found that apparently no old DOS command-line utility runs under Windows 7 x64, and that there is no compatibility option for this case. Is this some grand Microsoft sabotage? Are there any ways to overcome this? Thanks
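    For what it's worth, 64-bit Windows ships without NTVDM, the 16-bit DOS subsystem, so no 16-bit executable can run natively regardless of compatibility settings. An emulator is the usual way out (a sketch using DOSBox; paths and the archive name are placeholders):

        # Inside DOSBox: mount a host folder as drive C and run the 16-bit tool
        mount c c:\dosfiles
        c:
        arj x archive.arj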

    Read the article

  • Would there be any problems with DEP turned off?

    - by IneedHelp
    I recently moved to a fresh Windows 8 x64 system and learned that my favourite firewall (JPF - Jetico Personal Firewall) doesn't get along with Win8 x64 (CRITICAL_STRUCTURE_CORRUPTION errors). But I cannot do without JPF, so I tried everything I could think of (test mode, debugging, various system changes), yet I was still getting blue screens because of the firewall driver/software. I know for sure that it is the firewall causing the problems, because I get blue screens as soon as I install it and they stop when I uninstall it. I also tested it thoroughly on virtual computers. Anyway, I discovered that by completely turning DEP off with this command:

        bcdedit.exe /set {current} nx AlwaysOff

    the firewall no longer causes blue screens. So my question is: what could go wrong with DEP completely turned off? Note: I do not care much about hardware/Windows security; I keep myself secure by using sandboxes and virtual computers (and I also have backups), so I'm not concerned with viruses and rootkits or whatever people are freaking out about.
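    For reversibility's sake: the stock Windows client policy is OptIn (DEP for system binaries only), and it can be restored at any time with the same tool (a reboot is required for nx changes to take effect):

        bcdedit.exe /set {current} nx OptIn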

    Read the article

  • rsync assigns deny permission

    - by user773478
    Currently a script is used to copy files using rsync (version 2.6.9, protocol version 29) from Linux/Unix servers to a W2K3 server, using a very basic command such as "rsync -v source_server::share_name/file_name /cygdrive///file_name". The script then makes a copy of the downloaded file for other purposes. This is part of a larger middleware that is being moved to new hardware on W2K8 R2. The second part, making a copy of the file, does not work with the more recent rsync client version 3.0.7, protocol version 30 (it shows up as cwRsync in Add/Remove Programs). The reason is that rsync assigns special permissions to the file, including deny entries. The user (a service account) that downloads the file is in the local admin group. The file can be copied elsewhere using rsync, and it can be deleted, but it cannot be opened or copied locally by the same user, as the deny permission supersedes.
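    The newer cwRsync maps POSIX permission bits onto NTFS ACLs, which is where the unexpected deny entries tend to come from. A common mitigation is to stop rsync from setting restrictive permissions at all (a sketch; the destination path is a placeholder):

        rsync -v --chmod=ugo=rwX source_server::share_name/file_name /cygdrive/c/dest/file_name
        # Alternatively, add the "noacl" option to the mount in Cygwin's /etc/fstab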

    Read the article

  • Can't burn 8.1G iso onto 8.4GB DVD - "Media does not have enough free space"

    - by Max Williams
    I'm trying to burn a DVD on a Mac with an external (FireWire-connected) DVD drive. I'm checking the size of the ISO thus:

        DVD-4:dvd_files macbook$ ls -l /tmp/hybrid.iso
        -rw-r--r--  1 macbook  wheel  8700884992 Aug 22 10:57 /tmp/hybrid.iso
        DVD-4:dvd_files macbook$ ls -lh /tmp/hybrid.iso
        -rw-r--r--  1 macbook  wheel  8.1G Aug 22 10:57 /tmp/hybrid.iso

    The "human-readable" size is 8.1 gig, but when I try to burn onto an 8.4G dual-layer DVD, it says "Media does not have enough free space". The definition of a "gigabyte" according to Wikipedia is 1 billion bytes, so the ISO size should actually be 8.7 gig by that definition, in which case the disc definitely isn't big enough, and the -h option to ls is just misleading. Is the discrepancy simply due to the ls command using a different definition of "G" (e.g. 1024 meg, i.e. 1.07 gig? This comes out as 8.103, which fits what ls is displaying)?
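    The discrepancy is exactly the two definitions of "G": ls -lh divides by 1024^3 (GiB), while disc capacities are quoted in decimal GB. A quick check (a sketch using bc):

        $ bytes=8700884992
        $ echo "scale=3; $bytes/(1024^3)" | bc     # 8.103 -- the GiB figure ls -lh shows
        $ echo "scale=3; $bytes/(1000^3)" | bc     # 8.700 -- decimal GB

    A dual-layer DVD's nominal 8.5 GB is decimal, roughly 7.96 GiB, so this image genuinely does not fit either way.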

    Read the article

  • Information about a file or directory

    - by Tim
    In Linux, the information about a file or directory is stored in its inode. I was wondering what the data structure for information about a file or directory is in Windows 7. How do you get the information about a file or directory in Linux and in Windows 7, in a terminal or command-line window? Is the owner of a file or directory always its creator? Can it be changed? Is there a creation timestamp for a file in Linux and in Windows 7? How do you get it? Thanks and regards!
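    A few of these have direct command-line answers (a sketch; the Windows commands assume cmd.exe on Windows 7):

        stat myfile                       # Linux: inode number, owner, timestamps
        dir /q myfile                     # Windows: shows the file's owner
        fsutil file queryfileid myfile    # Windows: the NTFS file ID (MFT record reference)

    On NTFS the rough analogue of the inode is the file's MFT (Master File Table) record. Ownership is changeable (chown on Linux; takeown or icacls /setowner on Windows), so the owner need not stay the creator. NTFS stores a creation time (dir /t:c shows it); classic Linux filesystems of that era expose no creation timestamp, only atime/mtime/ctime.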

    Read the article

  • 2 NICs, same subnet: route backup traffic through one of them

    - by Matthewhall58
    I have a Windows server and a CentOS Linux server. I want the nightly backup file (tar.gz) that is copied to the Windows machine to use a different NIC, so that the main NIC is not burdened with moving the large file. Each server has two NICs. The network is 10.173.10.0, mask 255.255.255.192.

        CentOS Linux box:
        eth0: 10.173.10.80 mask 255.255.255.192 gw 10.173.10.65
        eth1: 10.173.10.71 mask 255.255.255.192 gw 10.173.10.65

        Windows box:
        eth0: 10.173.10.72 mask 255.255.255.192 gw 10.173.10.65
        eth1: 10.173.10.70 mask 255.255.255.192 gw 10.173.10.65

    I can ping each machine from each machine. On the Linux machine I use the command route add -host 10.173.10.70 dev eth1, but then when I ping 10.173.10.70 it is unreachable. Why?
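    With both NICs on one subnet, the kernel will answer ARP for either address on either interface, which tends to defeat naive per-host routes. Pinning the source address and tightening ARP behaviour usually helps (a sketch using iproute2):

        # Route traffic for the Windows backup NIC out eth1, sourced from eth1's address
        ip route add 10.173.10.70/32 dev eth1 src 10.173.10.71
        # Make each NIC answer ARP only for its own address
        sysctl -w net.ipv4.conf.all.arp_ignore=1
        sysctl -w net.ipv4.conf.all.arp_announce=2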

    Read the article

  • Software Center does not load

    - by eim
    I'm having problems opening my Software Center: it just shuts off after loading for a few seconds. I can't even get to the main page. I tried to follow these commands, to no avail:

        sudo apt-get purge software-center
        sudo apt-get update
        sudo apt-get install software-center

    Instead, I get an error after entering the first command:

        eim@eim-VAIO:~$ sudo apt-get purge software-center
        Reading package lists... Error!
        E: Encountered a section with no Package: header
        E: Problem with MergeList /var/lib/apt/lists/security.ubuntu.com_ubuntu_dists_precise-security_universe_i18n_Translation-en
        E: The package lists or status file could not be parsed or opened.

    I tried doing this as well:

        cd ~/.cache; rm -r software-center    (nothing happened)

    And this: adding /usr/lib/policykit-1-gnome/polkit-gnome-authentication-agent-1 to the Startup Applications. Error message:

        eim@eim-VAIO:~$ /usr/lib/policykit-1-gnome/polkit-gnome-authentication-agent-1
        Gtk-Message: Not loading module "atk-bridge": The functionality is provided by GTK natively. Please try to not load it.
        ** (polkit-gnome-authentication-agent-1:3563): WARNING **: Unable to register authentication agent: GDBus.Error:org.freedesktop.PolicyKit1.Error.Failed: An authentication agent already exists for the given subject
        Cannot register authentication agent: GDBus.Error:org.freedesktop.PolicyKit1.Error.Failed: An authentication agent already exists for the given subject

    I think I've done all the possible fixes suggested by my research, but I can't seem to get this to work. Can someone please help?

    NOTE: Okay... I just found the solution to my problem. I'll post the answer here since I can't answer my own question yet. Open a terminal:

        sudo rm /var/lib/apt/lists/* -vf
        sudo apt-get update

    Now I can open my Software Center! I found the answer here: How do I fix a "Problem with MergeList" error when trying to do an update?

    Read the article

  • How do I automatically start Clamz with AMZ files for Amazon MP3 downloads?

    - by Takkat
    Chromium can open downloaded files with the default application (e.g. PDF in Evince). In my setup a downloaded .amz file (for Amazon MP3) always opened with Gedit. However, I would like all downloaded .amz files to automatically open with Clamz, a command-line download tool that works like a charm. As my .amz files were associated with Gedit in Nautilus too, I thought it was a good idea to add a clamz.desktop file in ~/.local/share/applications (according to this answer):

        [Desktop Entry]
        Encoding=UTF-8
        Name=Clamz
        Comment=Open AMZ files for Amazon MP3 download
        Exec=/usr/bin/clamz %u
        Terminal=True
        Type=Application
        Icon=
        Categories=Application;
        StartupNotify=true
        MimeType=audio/x-amzxml;
        NoDisplay=true

    This lets me choose Clamz as the default application in Nautilus. But when opening an .amz file in Nautilus it still does not open with Clamz as expected; it is treated as an executable text file instead (note that the executable bit is not set!). Is there any other way to make Chromium or Nautilus always open an .amz file with Clamz? Did I miss changing a setting somewhere else?
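    Rather than relying on Nautilus alone, the association can be set and verified with xdg-mime (a sketch, run as the regular user):

        xdg-mime default clamz.desktop audio/x-amzxml
        xdg-mime query default audio/x-amzxml      # should print clamz.desktop

    Also note that the Desktop Entry spec defines booleans as lowercase, so Terminal=true rather than Terminal=True; some launchers reject the capitalized form.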

    Read the article

  • Handling commands or events that wait for an action to be completed afterwards

    - by virulent
    Say you have two events: Action1 and Action2. When you receive Action1, you want to store some arbitrary data to be used the next time Action2 rolls around. Typically Action1 is a command, but it can also be another event; the idea is the same. The current way I am implementing this is by storing state and then, when Action2 is called, checking whether that specific state is there. This is obviously a bit messy and leads to a lot of redundant code. Here is an example of how I am doing it, in pseudocode form (broken down quite a bit, obviously):

        void onAction1(event) {
            Player = event.getPlayer()
            Player.addState("my_action_to_do")
        }

        void onAction2(event) {
            Player = event.getPlayer()
            if not Player.hasState("my_action_to_do") {
                return
            }
            // Do something
        }

    When doing this for a lot of other actions it gets somewhat ugly, and I wanted to know if there is something I can do to improve upon it. I was thinking of something like the following, which wouldn't require passing state around -- but is this also the wrong direction?

        void onAction1(event) {
            Player = event.getPlayer()
            Player.onAction2(new Runnable() {
                public void run() {
                    // Do something
                }
            })
        }

    If one wanted to take it even further, could you not simply do this?

        void onPlayerEnter(event) {    // When they join the server
            Player = event.getPlayer()
            Player.onAction1(new Runnable() {
                public void run() {
                    // Now wait for action 2
                    Player.onAction2(new Runnable() {
                        // Do something
                    })
                }
            }, true)    // TRUE would be to repeat the event,
                        // not remove it after it is called
        }

    Any input would be wonderful.

    Read the article

  • Do you have to recreate workspaces after upgrading a TFS 2008 server to TFS 2010?

    - by Clara Oscura
    I am just reposting this thread from an MSDN forum, since it seems to be unavailable now. It was very useful when I was having trouble with my folder mappings after migrating to TFS 2010.

    Question: I opened VS 2008 and connected it to the upgraded 2010 TFS server. Upon clicking any of our team projects in Source Control Explorer I get "Team Foundation Error - The workspace MYWORKSPACE;DOMAIN\MYUsername already exists on computer MYPCNAME."

    Answer: The same local paths on your machine are mapped to two different workspaces, one on the pre-upgrade server and one on the post-upgrade server. It's not safe to have multiple workspaces on different servers mapped to the same local paths, because you could pend some changes while connected to one server and the other server would have no idea what you did. You should either delete your conflicting workspaces from one of the servers (if you don't need them on both), or test the new TFS instance from a new workspace (on a different machine). If you want to test an existing production workspace on both servers, then yes, you will have to mess around with the workspace cache. You don't have to delete the entire cache: just run "tf workspaces /remove:* /server:<serverurl>" to clear the cached workspaces for a server (the command won't delete the workspaces themselves), and possibly "tf workspaces /server:<server>" to refresh the workspace cache for a given server. You will also have to back up and restore the workspace before switching servers, or your local files could be inconsistent.

    From the "Microsoft Visual Studio Team Foundation Server 2010 Beta 1" forum (no longer available?)

    Technorati Tags: TFS 2010, TFS Workspaces, Team System, Team Foundation Server 2010
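    Consolidated, the cache-clearing sequence from a Visual Studio 2008 command prompt looks like this (a sketch; server URLs are placeholders):

        tf workspaces /remove:* /server:http://oldtfs:8080
        tf workspaces /server:http://newtfs:8080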

    Read the article
